Volatility persistence is a stylized statistical property of financial time-series data such as exchange rates and stock returns. The purpose of this letter is to investigate the relationship between volatility persistence and predictability of squared returns.
This paper points out that the ARMA models followed by the squares of GARCH processes are themselves volatile, and it gives explicit and general forms of their dependent, volatile innovations. The volatility function of the ARMA innovations is shown to be the square of the corresponding GARCH volatility function. The ARMA structure facilitates the prediction of GARCH squares, and predictive intervals are considered. Further, the developments suggest families of volatile ARMA processes.
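As a minimal numerical sketch of the ARMA structure behind this result (parameter values are illustrative, not from the paper): for a GARCH(1,1) process with conditional variance σ²_t = ω + α ε²_{t−1} + β σ²_{t−1}, defining the innovation ν_t = ε²_t − σ²_t gives the exact ARMA(1,1) recursion ε²_t = ω + (α+β) ε²_{t−1} + ν_t − β ν_{t−1}, which can be checked by simulation:

```python
import random

# Simulate a GARCH(1,1) process and verify numerically that its squares
# satisfy the ARMA(1,1) identity
#   e_t^2 = omega + (alpha + beta) * e_{t-1}^2 + nu_t - beta * nu_{t-1},
# where nu_t = e_t^2 - sigma_t^2 is the dependent, volatile ARMA innovation.
random.seed(1)
omega, alpha, beta = 0.1, 0.1, 0.8          # illustrative parameters
n = 1000
sigma2 = [omega / (1 - alpha - beta)]       # start at unconditional variance
eps = [random.gauss(0, 1) * sigma2[0] ** 0.5]
for t in range(1, n):
    sigma2.append(omega + alpha * eps[-1] ** 2 + beta * sigma2[-1])
    eps.append(random.gauss(0, 1) * sigma2[-1] ** 0.5)

nu = [e ** 2 - s for e, s in zip(eps, sigma2)]   # ARMA innovations
max_err = max(
    abs(eps[t] ** 2 - (omega + (alpha + beta) * eps[t - 1] ** 2
                       + nu[t] - beta * nu[t - 1]))
    for t in range(1, n)
)
print(max_err)   # identity holds up to floating-point rounding
```

The identity is exact by construction; the simulation only confirms the algebra numerically.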
The summary of research activities concerning general theory and methodology performed in Poland in the period 2015–2018 is presented as a national report for the 27th IUGG (International Union of Geodesy and Geophysics) General Assembly. It contains the results of research on new or improved methods and variants of robust parameter estimation and their application, especially to control network analysis. Reliability analysis of the observation system and an integrated adjustment approach are also given. The identifiability (ID) index, a new measure for the minimal detectable bias (MDB) in the observation system of a network, has been introduced. A new method of covariance function parameter estimation in least squares collocation has been developed. The robustified version of Shift-Msplit estimation, termed Shift-M*split estimation, which enables robust estimation of parameter differences without the need for prior estimation of the parameters themselves, has been introduced. Results on the analysis of geodetic time series, particularly Earth orientation parameter time series, geocenter time series, permanent station coordinates and sea level variation time series, are also provided in this review paper. The full bibliography of related works is given in the references.
The article presents the results of a study on how the method of preparing input data for the GMDH (Group Method of Data Handling) neural network influences the prediction of corrections for the Polish timescale UTC(PL). Prediction of the corrections was carried out using two methods: time series analysis and regression. As appropriate to these methods, the input data were prepared on the basis of two time series, ts1 and ts2. The research concerned the determination of the prediction errors on particular days of the forecast and the influence of the quantity of input data on the prediction error. The results indicate that, in the case of the GMDH neural network, the best quality of forecasting for UTC(PL) is obtained with the time series analysis method. The prediction errors obtained did not exceed ±8 ns, which confirms the possibility of maintaining the Polish timescale at a high level of compliance with UTC.
When observations are autocorrelated, the standard formulae for the estimators of the variance, s^2, and of the variance of the mean, s^2(x̄), are no longer adequate. They should be replaced by suitably defined estimators, s_a^2 and s_a^2(x̄), which are unbiased provided that the autocorrelation function is known. The formula for s_a^2 was given by Bayley and Hammersley in 1946; this work provides its simple derivation. The quantity named the effective number of observations, n_eff, is thoroughly discussed. It replaces the actual number of observations n when describing the relationship between the variance and the variance of the mean, and can be used to express s_a^2 and s_a^2(x̄) in a simple manner. The dispersion of both estimators depends on another effective number, the effective degrees of freedom ν_eff. Most of the formulae discussed in this paper are scattered throughout the literature and not very well known; this work aims to promote their more widespread use. The presented algorithms represent a natural extension of the GUM formulation of type-A uncertainty to the case of autocorrelated observations.
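A compact sketch of this formalism (the constants are standard textbook results for a known ACF, not copied from the paper): with autocorrelations ρ_k, one takes n_eff = n / (1 + 2 Σ_{k=1}^{n−1} (1 − k/n) ρ_k), corrects the ordinary s^2 for its bias, and divides by n_eff to get the variance of the mean. For an AR(1) process the ACF is known in closed form, ρ_k = φ^k:

```python
import random

# Known-ACF case: compute the effective number of observations n_eff,
# the bias-corrected variance estimator s2_a, and the variance of the
# mean s2_a / n_eff for a simulated AR(1) sample with rho_k = phi**k.
random.seed(7)
phi, n = 0.6, 500
x = [random.gauss(0, 1)]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0, 1))

rho = [phi ** k for k in range(n)]          # known autocorrelation function
n_eff = n / (1 + 2 * sum((1 - k / n) * rho[k] for k in range(1, n)))

xbar = sum(x) / n
s2 = sum((v - xbar) ** 2 for v in x) / (n - 1)   # ordinary estimator (biased here)
s2_a = s2 * (n - 1) / (n * (1 - 1 / n_eff))      # unbiased for the variance
s2_a_mean = s2_a / n_eff                         # variance of the mean
print(round(n_eff, 1), round(s2_a, 3), round(s2_a_mean, 4))
```

With positive autocorrelation, n_eff is well below n and the ordinary s^2 underestimates the variance, so s_a^2 > s^2, as the run illustrates.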
The paper presents a local dynamic approach to the integration of an ensemble of predictors. The classical fusion of many predictor results takes into account all units, forming a weighted average of the results of all members of the ensemble. This paper proposes a different approach: the prediction of the time series for the next day is made by only one member of the ensemble, namely the one that was best in the learning stage for the input vector closest to the input data actually applied. Thanks to such an arrangement we avoid the situation in which the worst unit reduces the accuracy of the whole ensemble. In this way we obtain an increased level of statistical forecasting accuracy, since each task is performed by the best suited predictor. Moreover, such an arrangement of integration allows for using units of very different quality without decreasing the quality of the final prediction. The numerical experiments, performed on forecasting the next-day average PM10 pollution and on forecasting the 24-element vector of hourly loads of the power system, have confirmed the superiority of the presented approach. All quality measures of the forecast have been significantly improved.
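The selection rule can be sketched in a few lines (the learning-stage records and stand-in predictors below are invented for illustration): instead of averaging, pick the single member that was best on the learning-stage input closest to the current input vector.

```python
# Toy sketch of best-member selection by nearest learning-stage input.

def dist(u, v):
    # Euclidean distance between two input vectors
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# learning-stage record: (input vector, index of the best member for it)
learning_stage = [
    ((0.1, 0.2), 0),   # member 0 was most accurate near this input
    ((0.8, 0.9), 1),   # member 1 was most accurate near this input
    ((0.5, 0.1), 2),
]
members = [lambda x: 10.0, lambda x: 20.0, lambda x: 30.0]  # stand-in predictors

def predict(x):
    # delegate the whole forecast to the locally best member
    _, best = min(learning_stage, key=lambda rec: dist(rec[0], x))
    return members[best](x)

print(predict((0.75, 0.85)))  # nearest learning input is (0.8, 0.9) -> member 1
```

The weak members never contaminate the output: each region of the input space is served only by the unit that proved best there.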
Describing gas boiler fuel consumption as a time series gives the opportunity to use tools appropriate for processing such data to analyze this phenomenon. One of these tools is the class of ARIMA models. The article proposes models of this type for predicting monthly gas consumption in a boiler room serving heating and hot-water preparation; the boiler supplies heat to a group of residential buildings. Based on the collected data, three specific models were selected and their forecast accuracy was assessed. Calculations and analyses were carried out in the R environment using the “forecast” and “ggplot2” packages. The good quality of the obtained forecasts has been demonstrated, confirming the usefulness of the proposed analytical tools. The article’s summary also indicates the purposes for which forecasts obtained in this way can be used. They can serve to diagnose the correct operation of a heat source: recording fuel consumption at a level deviating significantly from the forecast should be a signal to immediately inspect the boiler room and the heat supply system and to explain the reason for the difference. In this way it is possible to detect irregularities in the operation of the heat supply system before they would be detected by traditional methods. The gas consumption forecast is also useful for optimizing the financial management of the property manager responsible for the operation of the boiler room; on this basis, operating fees or financial operations using periodic capital surpluses may be planned.
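The paper itself works in R with the “forecast” and “ggplot2” packages; as a package-neutral sketch, the kind of forecast-accuracy check involved can be illustrated in Python. The monthly consumption figures below are invented, and the benchmark is a simple seasonal-naive forecast (same month last year), not the paper’s fitted ARIMA models:

```python
# Invented monthly gas consumption (heating-season shape), two years.
history = [52, 48, 40, 30, 18, 10, 8, 9, 15, 28, 41, 50,   # year 1
           54, 47, 42, 31, 19, 11, 8, 10, 16, 29, 42, 51]  # year 2

forecast = history[:12]        # seasonal naive: repeat the previous year
actual = history[12:]

# Mean absolute percentage error, a standard forecast-quality measure.
mape = 100 / len(actual) * sum(
    abs((a - f) / a) for a, f in zip(actual, forecast)
)
print(round(mape, 2))          # MAPE in percent
```

A fitted ARIMA model would be judged the same way: forecasts are compared against held-out months and a low MAPE indicates usable forecast quality.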
Position time series from permanent Global Navigation Satellite System (GNSS) stations are commonly used for estimating secular velocities of discrete points on the Earth’s surface. An understanding of background noise in the GNSS position time series is essential to obtain realistic estimates of velocity uncertainties. The current study focuses on the investigation of background noise in position time series obtained from thirteen permanent GNSS stations located in the Nepal Himalaya using the spectral analysis method. The power spectrum of the GNSS position time series has been estimated using the Lomb–Scargle method. The iterative nonlinear Levenberg–Marquardt (LM) algorithm has been applied to estimate the spectral index of the power spectrum. The power spectrum can be described by white noise in the high-frequency zone and power-law noise in the lower-frequency zone. The mean and the standard deviation of the estimated spectral indices are −1.46 ± 0.14, −1.39 ± 0.16 and −1.53 ± 0.07 for the north, east and vertical components, respectively. On average, the power-law noise extends up to a period of ca. 21 days; for shorter periods, i.e. less than ca. 21 days, the spectra are white. A spectral index corresponding to random walk noise (ca. −2) is obtained for a site located above the base of a seismogenic zone, which can be due to the combined effect of tectonic and nontectonic factors rather than spurious monument motion. Overall, the usefulness of investigating the background noise in GNSS position time series is discussed.
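The spectral index κ in a power-law spectrum P(f) = A·f^κ is what the study estimates (there with Lomb–Scargle spectra and Levenberg–Marquardt fitting). As a simplified stand-in, κ is linear in log-log coordinates, so on a clean synthetic spectrum an ordinary least-squares fit on (log f, log P) recovers it exactly:

```python
import math

# Synthetic power-law spectrum (invented, not the Nepal data): recover the
# spectral index kappa by a log-log least-squares line fit.
kappa_true, A = -1.5, 2.0
freqs = [0.01 * (i + 1) for i in range(100)]
power = [A * f ** kappa_true for f in freqs]

lx = [math.log(f) for f in freqs]
ly = [math.log(p) for p in power]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
kappa = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
print(round(kappa, 3))   # recovers -1.5 on noise-free input
```

On real spectra the fit is noisy and nonlinear weighting matters, which is why an iterative scheme such as Levenberg–Marquardt is used in the study; the sketch only shows what the index means.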
The paper proposes an adaptation of mathematical models derived from the theory of deterministic chaos to short-term power forecasts of wind turbines. The operation of wind power plants and the generated power depend mainly on the wind speed at a given location. It is a stochastic process dependent on many factors and very difficult to predict. Classical forecasting models are often unable to find the existing relationships between the factors influencing wind power output. Therefore, we decided to turn to fractal geometry. Two models based on self-similar processes, (M-CO) and (M-COP), and the (M-HUR) model were built. The accuracy of these models was compared with that of other short-term forecasting models: the modified power curve model adjusted to local conditions (M-PC) and the Canonical Distribution of the Vector of Random Variables model (CDVRM). Examples of applications confirm the valuable properties of the proposed approaches.
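One standard self-similarity diagnostic behind Hurst-type models such as (M-HUR) is the aggregated-variance method: for a self-similar process the variance of block means scales as m^(2H−2), and the Hurst exponent H is read off a log-log fit. The sketch below (not the paper’s method or data) applies it to plain white noise, for which H should come out near 0.5:

```python
import math
import random

# Aggregated-variance estimate of the Hurst exponent H on white noise.
random.seed(3)
x = [random.gauss(0, 1) for _ in range(4096)]

pts = []
for m in (4, 8, 16, 32, 64):          # block sizes (all divide len(x))
    means = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
    mu = sum(means) / len(means)
    var = sum((v - mu) ** 2 for v in means) / len(means)
    pts.append((math.log(m), math.log(var)))

# slope of log(var) vs log(m) is 2H - 2
mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
slope = (sum((a - mx) * (b - my) for a, b in pts)
         / sum((a - mx) ** 2 for a, _ in pts))
H = 1 + slope / 2
print(round(H, 2))   # near 0.5 for uncorrelated data
```

Persistent (trend-reinforcing) wind series would give H > 0.5, which is the kind of structure such models try to exploit.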
Drinking water systems are critical to society. They protect residents from waterborne illnesses and encourage the economic success of businesses by providing consistent water supplies to industries and supporting a healthy work force. This paper presents a study on water quality management in a treatment plant (TP) using the Box-Jenkins method. A comparative analysis was carried out between the concentrations of water quality parameters and Colombian legislation and the guidelines established by the World Health Organization. We also studied the influence of rainfall on variations in the water quality supplied by the TP. A correlation analysis between water quality parameters was carried out to identify management parameters during TP operation. Results showed the usefulness of the Box-Jenkins method for analyzing TP operation on a weekly timescale (medium-term), but not on a daily timescale (short-term). This was probably due to significant daily variations in the management parameters of water quality in the TP. The application of a weekly moving average transformation to the daily time series of water quality parameter concentrations significantly decreased the mean absolute percentage error of the forecasts of the Box-Jenkins models developed. The Box-Jenkins analysis suggested an influence of the water quality parameter concentrations observed in the TP during the previous 2–3 weeks. The study can thus serve as a medium-term planning tool in relation to atypical events or contingencies observed during TP operation. Finally, the findings of this study will be useful for companies or designers of drinking water treatment systems in making operational decisions within the public health framework.
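The weekly moving average transformation mentioned above is a simple pre-processing step before Box-Jenkins modelling; the sketch below (with invented daily concentrations) shows how it damps the daily variations that made short-term forecasting unreliable:

```python
# 7-day moving average applied to a daily series (invented data with one
# noisy spike on day 3) before fitting a Box-Jenkins model.
daily = [4.1, 3.8, 9.5, 4.0, 4.3, 3.7, 4.2,
         4.4, 4.0, 4.1, 3.9, 4.2, 4.3, 4.0]

weekly_ma = [round(sum(daily[i:i + 7]) / 7, 2)
             for i in range(len(daily) - 6)]
print(weekly_ma)   # smoothed series: the daily spike is strongly damped
```

The smoothed series has far less high-frequency variation, which is consistent with the reported drop in forecast MAPE after the transformation.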
We study the autocovariance structure of a general Markov switching second-order stationary VARMA model. Then we give stable finite-order VARMA(p*, q*) representations for those M-state Markov switching VARMA(p, q) processes in which the observables are uncorrelated with the regime variables. This allows us to obtain sharper bounds for p* and q* than the ones existing in the literature. Our results provide new insights into the stochastic properties of these processes and facilitate statistical inference about the orders of MS-VARMA models and the underlying number of hidden states.
In economics we often face a system which intrinsically imposes a hierarchical structure on its components, e.g., in modeling trade accounts related to foreign exchange or in the optimization of regional air protection policy. The problem of reconciling forecasts obtained at different levels of a hierarchy has been addressed in the statistical and econometric literature many times and concerns bringing together forecasts obtained independently at the different levels. This paper deals with this issue with regard to hierarchical functional time series. We present and critically discuss the state of the art and indicate opportunities for applying these methods to a certain environmental protection problem. We critically compare the best predictor known from the literature with our own original proposal. Within the paper we study a macromodel describing day and night air pollution in the Silesia region divided into five subregions.
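The reconciliation problem can be stated in one line of toy code (numbers invented): forecasts made independently per subregion need not sum to the independently made regional forecast, and the simplest reconciliation, bottom-up, replaces the top-level forecast with the sum of the bottom-level ones:

```python
# Five subregion forecasts (bottom level) and an independent regional
# forecast (top level); bottom-up reconciliation makes the hierarchy coherent.
subregion_forecasts = {"A": 12.0, "B": 8.5, "C": 10.0, "D": 7.5, "E": 9.0}
region_forecast_base = 45.0                     # made independently at the top

region_forecast_bu = sum(subregion_forecasts.values())  # coherent by design
print(region_forecast_base, region_forecast_bu)         # they disagree
```

More refined reconciliation methods, including those discussed in the hierarchical functional time series literature, adjust all levels jointly rather than discarding the top-level forecast, but the coherence constraint they enforce is the same.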
The aim of the article is to construct an asymptotically consistent test, based on a subsampling approach, to verify the hypothesis of the existence of an individual or common deterministic cycle in the coordinates of a multivariate macroeconomic time series. By a deterministic cycle we mean periodic or almost periodic fluctuations in the mean function. To construct the test we formulate a multivariate non-parametric model containing the business cycle component in the unconditional mean function. The construction relies on the Fourier representation of the unconditional expectation of a multivariate Almost Periodically Correlated time series and is related to the fixed deterministic cycle presented in the literature. An analysis of the existence of common deterministic business cycles for selected European countries is presented, based on monthly industrial production indexes. Our main finding from the empirical part is that the deterministic cycle can be strongly supported by the data and therefore should not be automatically neglected in an analysis without justification.
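The Fourier representation underlying the construction can be illustrated on a toy monthly series (invented, noise-free): a deterministic cycle in the mean is a finite Fourier term, and its coefficients are recovered by projecting the series on cos/sin at the cycle frequency:

```python
import math

# Monthly series with an exact 12-month deterministic cycle in the mean;
# recover the Fourier coefficients at the cycle frequency by projection.
n, period = 120, 12
a_true, b_true = 2.0, -1.0
y = [a_true * math.cos(2 * math.pi * t / period)
     + b_true * math.sin(2 * math.pi * t / period) for t in range(n)]

# n is a multiple of the period, so the projections are exact
a_hat = 2 / n * sum(y[t] * math.cos(2 * math.pi * t / period) for t in range(n))
b_hat = 2 / n * sum(y[t] * math.sin(2 * math.pi * t / period) for t in range(n))
print(round(a_hat, 6), round(b_hat, 6))   # recovers 2.0 and -1.0
```

In the article the same representation is applied to the unconditional mean of a noisy multivariate series, and the subsampling test asks whether such coefficients are significantly nonzero.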
Prior knowledge of the autocorrelation function (ACF) enables the application of an analytical formalism for the unbiased estimators of the variance, s_a^2, and of the variance of the mean, s_a^2(x̄). Both can be expressed with the use of the so-called effective number of observations n_eff. We show how to adapt this formalism if only an estimate {r_k} of the ACF derived from a sample is available. A novel method is introduced, based on truncation of the {r_k} function at the point of its first transit through zero (FTZ). It can be applied to non-negative ACFs with a correlation range smaller than the sample size. Contrary to the other methods described in the literature, the FTZ method assures the finite range 1 < n_eff ≤ n for any data. The effect of replacing the standard estimator of the ACF by three alternative estimators is also investigated. Monte Carlo simulations, concerning the bias and dispersion of the resulting estimators s_a and s_a(x̄), suggest that the presented formalism can be effectively used to determine a measurement uncertainty. The described method is illustrated with an exemplary analysis of autocorrelated variations of the intensity of an X-ray beam diffracted from a powder sample, known as the particle statistics effect.
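The FTZ rule itself is a few lines of code (the sample ACF values below are invented): keep the estimated autocorrelations {r_k} only up to the lag where they first cross zero, then plug the retained lags into the usual n_eff formula:

```python
# Truncate the sample ACF at its first transit through zero (FTZ), then
# compute the effective number of observations from the retained lags.
n = 100
r = [1.0, 0.6, 0.35, 0.18, 0.05, -0.04, -0.02, 0.01]   # r[0] = 1 by definition

# first lag at which the sample ACF is <= 0
cutoff = next((k for k in range(1, len(r)) if r[k] <= 0), len(r))

n_eff = n / (1 + 2 * sum((1 - k / n) * r[k] for k in range(1, cutoff)))
print(cutoff, round(n_eff, 2))
```

Because only non-negative leading lags enter the sum, the denominator stays ≥ 1 and n_eff is guaranteed to lie in (1, n], which is the finite-range property claimed for the method.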
Securing the supply of the necessary minimum of energy in each country is a basic condition for the energy security of the state and its citizens. The concept of energy security combines several aspects at the same time: it can be considered in terms of the availability of a country’s own energy resources, it concerns technical aspects related to technical infrastructure, and it involves political aspects related to the management and diversification of energy supplies. Another aspect of energy security is the environmental perspective, which is now becoming a priority in the light of the adopted objectives of the European Union’s energy policy. The restrictive requirements for reducing greenhouse gas emissions and increasing the required share of renewable energy sources in the energy balance of the Member States are becoming a challenge for economies whose raw-material structure relies largely on fossil fuels, including Poland. Poland is the largest producer of hard coal in the European Union, and hard coal is a strategic raw material, satisfying about 50% of the country’s energy demand. In this context, the main goal of the article was to determine the future sales of hard coal up to 2030 in relation to the environmental regulations introduced in the energy sector. For this purpose, a mathematical model with a 95% confidence interval was developed using artificial LSTM neural networks, which belong to deep learning techniques; the model reflects the key relationships between hard coal mining and the assumptions adopted in the National Energy and Climate Plan for the years 2021–2030 (NECP).
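The article’s 95% interval comes from an LSTM model; as a model-agnostic sketch (all numbers invented), a normal-approximation 95% prediction interval around any point forecast is the forecast ± 1.96 residual standard deviations:

```python
# Normal-approximation 95% prediction interval from model residuals
# (invented residuals and point forecast; not the paper's LSTM output).
residuals = [-1.2, 0.8, 0.4, -0.5, 1.1, -0.9, 0.3, 0.0, -0.6, 0.7]
point_forecast = 48.0    # e.g. forecast hard coal sales for one year

m = sum(residuals) / len(residuals)
sd = (sum((e - m) ** 2 for e in residuals) / (len(residuals) - 1)) ** 0.5
lo, hi = point_forecast - 1.96 * sd, point_forecast + 1.96 * sd
print(round(lo, 2), round(hi, 2))   # approximate 95% interval bounds
```

For a neural network forecaster the residual distribution would be checked (or bootstrapped) rather than assumed normal, but the interval construction follows the same pattern.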