Ensemble streamflow forecasting based on variational mode decomposition and long short-term memory

Cited by: 20
Authors
Sun, Xiaomei [1 ,2 ,3 ,4 ]
Zhang, Haiou [1 ,2 ,3 ,4 ]
Wang, Jian [1 ,2 ,3 ,4 ]
Shi, Chendi [1 ,2 ,3 ,4 ]
Hua, Dongwen [1 ,2 ,3 ,4 ]
Li, Juan [1 ,2 ,3 ,4 ]
Affiliations
[1] Shaanxi Prov Land Engn Construct Grp Co Ltd, Xian 710075, Peoples R China
[2] Shaanxi Prov Land Engn Construct Grp Co Ltd, Inst Land Engn & Technol, Xian 710021, Peoples R China
[3] Shaanxi Prov Land Consolidat Engn Technol Res Ctr, Xian 710075, Peoples R China
[4] Minist Nat Resources, Key Lab Degraded & Unused Land Consolidat Engn, Xian 710075, Peoples R China
Keywords
ARTIFICIAL NEURAL-NETWORK; RIVER FLOW; WAVELET; INFLOW; ARMA; EMD; PERFORMANCE; ALGORITHM; SELECTION; MACHINE;
DOI
10.1038/s41598-021-03725-7
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline classification codes
07; 0710; 09;
Abstract
Reliable and accurate streamflow forecasting plays a vital role in the optimal management of water resources. To improve the stability and accuracy of streamflow forecasting, a hybrid decomposition-ensemble model named VMD-LSTM-GBRT, which is sensitive to sampling, noise and long historical changes of streamflow, was established. The variational mode decomposition (VMD) algorithm was first applied to extract features, which were then learned by several long short-term memory (LSTM) networks. Simultaneously, an ensemble tree, a gradient boosting tree for regression (GBRT), was trained to model the relationships between the extracted features and the original streamflow. The outputs of these LSTMs were finally reconstructed by the GBRT model to obtain the streamflow forecasts. A historical daily streamflow series (1/1/1997 to 31/12/2014) for Yangxian station, Han River, China, was investigated with the proposed model. VMD-LSTM-GBRT was compared against alternatives in three aspects: (1) the feature extraction algorithm, for which ensemble empirical mode decomposition (EEMD) was substituted; (2) the feature learning technique, for which deep neural networks (DNNs) and support vector machines for regression (SVRs) were exploited; and (3) the ensemble strategy, for which simple summation was used. The results indicate that the VMD-LSTM-GBRT model outperforms all peer models in terms of the root mean square error (RMSE = 36.3692), determination coefficient (R^2 = 0.9890), mean absolute error (MAE = 9.5246) and peak percentage threshold statistics (PPTS(5) = 0.0391%). The proposed approach, based on the memory of long historical changes with deep feature representations, had good stability and high prediction precision.
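As a rough illustration of the decomposition-ensemble workflow the abstract describes, the sketch below substitutes a moving-average band split for VMD and ridge autoregressions for the LSTM learners (to keep the example dependency-light), then trains a scikit-learn GBRT to map the per-mode predictions back to the original series. The synthetic data, window sizes and lag are all hypothetical choices for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic daily "streamflow": annual + monthly cycles plus noise.
t = np.arange(2000)
flow = (50 + 10 * np.sin(2 * np.pi * t / 365)
        + 3 * np.sin(2 * np.pi * t / 30)
        + rng.normal(0, 1, t.size))

def moving_average(x, w):
    # Zero-padded edges; adequate for a sketch (VMD would avoid this).
    return np.convolve(x, np.ones(w) / w, mode="same")

# Stand-in for VMD: split the series into low/mid/high-frequency modes
# whose sum reconstructs the original signal exactly.
low = moving_average(flow, 91)
mid = moving_average(flow, 15) - low
high = flow - low - mid
modes = [low, mid, high]

def make_xy(series, lag):
    # Lagged windows as inputs, the next value as the target.
    X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

lag, split = 7, 1600
mode_preds = []
for m in modes:
    X, y = make_xy(m, lag)
    learner = Ridge(alpha=1.0).fit(X[:split], y[:split])  # stand-in for one LSTM
    mode_preds.append(learner.predict(X))

# GBRT ensemble step: reconstruct the original flow from the per-mode
# predictions instead of simply summing them.
features = np.column_stack(mode_preds)
target = flow[lag:]
gbrt = GradientBoostingRegressor(random_state=0).fit(features[:split], target[:split])
pred = gbrt.predict(features[split:])

rmse = float(np.sqrt(np.mean((pred - target[split:]) ** 2)))
print(f"test RMSE: {rmse:.3f}")
```

The GBRT replaces the naive summation ensemble: because the mode predictions carry correlated errors, a learned recombination can weight them adaptively, which is the point of comparison (3) in the abstract.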
Pages: 19
Related papers (47 in total)
[1] Bai Y, Chen Z, Xie J, Li C. Daily reservoir inflow forecasting using multiscale deep feature learning with hybrid models. Journal of Hydrology, 2016, 532: 193-206.
[2] Bengio Y, Simard P, Frasconi P. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 1994, 5(2): 157-166.
[3] Bergstra J. Proceedings of the 2011 Annual Conference on Neural Information Processing Systems, 2011, 24. DOI: 10.5555/2986459.2986743.
[4] Castellano-Méndez M, González-Manteiga W, Febrero-Bande M, Prada-Sánchez JM, Lozano-Calderón R. Modelling of the monthly and daily behaviour of the runoff of the Xallas river using Box-Jenkins and neural networks methods. Journal of Hydrology, 2004, 296(1-4): 38-58.
[5] de Mattos Neto PSG, Ferreira TAE, Lima AR, Vasconcelos GC, Cavalcanti GDC. A perturbative approach for enhancing the performance of time series forecasting. Neural Networks, 2017, 88: 114-124.
[6] de Oliveira JFL, Silva EG, de Mattos Neto PSG. A hybrid system based on dynamic selection for time series forecasting. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(8): 3251-3263.
[7] Dragomiretskiy K, Zosso D. Variational mode decomposition. IEEE Transactions on Signal Processing, 2014, 62(3): 531-544.
[8] Friedman JH. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 2001, 29(5): 1189-1232.
[9] Friedman JH. Stochastic gradient boosting. Computational Statistics & Data Analysis, 2002, 38(4): 367-378.
[10] Goodfellow I. Deep Learning, 2018. DOI: 10.1007/s10710-017-9314-z.