Taming the Chaos in Neural Network Time Series Predictions

Cited by: 5
Authors
Raubitzek, Sebastian [1 ]
Neubauer, Thomas [1 ]
Affiliations
[1] TU Wien, Fac Informat, Inst Informat Syst Engn, Informat & Software Engn Grp, Favoritenstr 9-11-194, A-1040 Vienna, Austria
Keywords
Hurst exponent; chaos; Lyapunov exponents; neural networks; time series prediction; deep learning; machine learning; LSTM; R/S analysis; iterated function systems; support vector machine; construction; ensemble; entropy
DOI
10.3390/e23111424
Chinese Library Classification
O4 [Physics]
Subject Classification Code
0702
Abstract
Machine learning methods such as Long Short-Term Memory (LSTM) neural networks can predict real-life time series data. Here, we present a new approach to predicting time series data that combines interpolation techniques, randomly parameterized LSTM neural networks, and measures of signal complexity, which we refer to as complexity measures throughout this research. First, we interpolate the time series data under study. Next, we predict the time series data using an ensemble of randomly parameterized LSTM neural networks. Finally, we filter the ensemble prediction based on the complexity of the original data, i.e., we keep only predictions whose complexity is close to that of the training data. We test the proposed approach on five different univariate time series, using linear and fractal interpolation to increase the amount of data. We evaluate five different complexity measures as ensemble filters: the Hurst exponent, Shannon's entropy, Fisher's information, SVD entropy, and the spectrum of Lyapunov exponents. Our results show that the interpolated predictions consistently outperform the non-interpolated ones. The best ensemble predictions always beat a baseline prediction based on a neural network with only a single hidden LSTM, gated recurrent unit (GRU), or simple recurrent neural network (RNN) layer. The complexity filters can reduce the error of a random ensemble prediction by a factor of 10. Furthermore, because we use randomly parameterized neural networks, no hyperparameter tuning is required. This makes the method useful for real-time time series prediction, since the usually costly and time-intensive optimization of hyperparameters can be circumvented.
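To make the filtering step concrete, below is a minimal Python sketch of how such a complexity filter could look, assuming the nolds package for the R/S-analysis Hurst exponent; the function name, the concatenation scheme, and the keep-the-closest-five heuristic are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import nolds

    def filter_by_hurst(train, predictions, keep=5):
        """Keep the ensemble predictions whose complexity (Hurst exponent)
        is closest to that of the training data."""
        h_train = nolds.hurst_rs(train)  # R/S-analysis Hurst exponent
        # Complexity deviation of each candidate: the training data extended
        # by the candidate prediction, compared against the training data alone
        dists = [abs(nolds.hurst_rs(np.concatenate([train, p])) - h_train)
                 for p in predictions]
        order = np.argsort(dists)  # smallest complexity deviation first
        return [predictions[i] for i in order[:keep]]

    # Example usage: average the surviving candidates into a filtered
    # ensemble forecast.
    # forecast = np.mean(filter_by_hurst(train, preds), axis=0)

The same scaffold would apply to the other filters named in the abstract (Shannon's entropy, Fisher's information, SVD entropy, the Lyapunov spectrum) by swapping in a different complexity function.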
Pages: 37