Novel volatility forecasting using deep learning-Long Short Term Memory Recurrent Neural Networks

Cited by: 112
Authors
Liu, Yang [1 ]
Affiliation
[1] Bank Amer Merrill Lynch, London EC1A 1HQ, England
Keywords
Deep learning; Long Short Term Memory Recurrent Neural Networks (LSTM RNNs); Support Vector Machines (SVM); Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model; Volatility forecasting; Differential evolution; Optimization; Algorithm
DOI
10.1016/j.eswa.2019.04.038
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Volatility is related to financial risk, and its prediction accuracy is very important in portfolio optimisation. A large body of literature to date suggests Support Vector Machines (SVM) as the "best" of the regression algorithms for financial data. Recent work, however, found that deep learning Long Short Term Memory Recurrent Neural Networks (LSTM RNNs) outperformed SVM on classification problems. In the present paper we conduct a new, unbiased evaluation of these two modelling techniques on regression problems, and we also compare them with a popular regression model, the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model, for financial volatility (risk) forecasting. Our experiments with financial data show that the LSTM RNNs performed as well as v-SVR for large-interval volatility forecasting, and both performed much better than the GARCH model on two financial instruments (S&P 500 and AAPL). Compared to support vector regression, the LSTM RNN deep learning method can learn from big raw data and can be run with many hidden layers and neurons on a GPU to achieve good predictions for long sequence data. The deep learning technique, LSTM RNNs with big data, can therefore be used to improve volatility prediction in place of v-SVR when v-SVR does not predict well for some financial stocks in a portfolio. This will help investors maximize their profit. (C) 2019 Elsevier Ltd. All rights reserved.
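As background to the GARCH baseline compared in the abstract, the sketch below shows the textbook GARCH(1,1) one-step-ahead variance recursion, sigma2_{t+1} = omega + alpha * r_t^2 + beta * sigma2_t. This is a minimal illustration, not the paper's implementation; the parameter values (omega, alpha, beta) and the function name are illustrative assumptions, and in practice the parameters are fitted by maximum likelihood.

```python
def garch11_forecast(returns, omega=1e-6, alpha=0.1, beta=0.85):
    """Return the next-step conditional variance for a return series
    under a GARCH(1,1) model with fixed (illustrative) parameters."""
    # Initialise the conditional variance with the sample variance.
    mean = sum(returns) / len(returns)
    sigma2 = sum((r - mean) ** 2 for r in returns) / len(returns)
    # Recursion: sigma2_{t+1} = omega + alpha * r_t^2 + beta * sigma2_t
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return sigma2

# One-step-ahead volatility forecast (standard deviation) for toy returns.
forecast = garch11_forecast([0.01, -0.02, 0.015, -0.005])
print(forecast ** 0.5)
```

Because alpha + beta < 1 here, the recursion is covariance-stationary; larger squared returns feed directly into a larger next-period variance, which is the clustering behaviour the paper's LSTM and v-SVR models are benchmarked against.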
Pages: 99-109 (11 pages)