A multivariate time series prediction model based on subspace echo state network

Cited by: 0
Authors
Han, Min [1 ]
Xu, Mei-Ling [1 ]
Wang, Xin-Ying [1 ]
Affiliation
[1] Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116023, Liaoning
Source
Jisuanji Xuebao/Chinese Journal of Computers | 2014 / Vol. 37 / No. 11
Keywords
Echo state network; Fast subspace decomposition; Multivariate time series; Prediction; Reservoir
DOI
10.3724/SP.J.1016.2014.02268
Abstract
Considering that the pseudo-inverse algorithm used to train an echo state network (ESN) for multivariate chaotic time series prediction may yield ill-posed solutions, we introduce a new approach, termed the Fast Subspace Decomposition Echo State Network (FSDESN) model. The model uses a Krylov subspace decomposition algorithm, built on an efficient Lanczos-type iteration, to extract a subspace of the large-scale reservoir state matrix, and then computes the output weights with the original state space replaced by this subspace. This substantially reduces the computational complexity compared with the conventional eigenvalue decomposition algorithm. In addition, the FSDESN model eliminates approximately collinear components, which resolves the ill-posed problem, avoids over-training, and improves the generalization performance of a single ESN. Simulation results on two sets of multivariate chaotic time series substantiate the effectiveness and characteristics of FSDESN. © 2014, Jisuanji Xuebao/Chinese Journal of Computers. All rights reserved.
Pages: 2268-2275
Page count: 7
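
The abstract describes replacing the pseudo-inverse readout of an ESN with a regression carried out in a low-dimensional subspace of the reservoir state matrix obtained through a Lanczos-type Krylov decomposition. Below is a minimal sketch of that idea, not the authors' algorithm: it uses SciPy's Lanczos-based svds as a stand-in for the paper's fast subspace decomposition, and the Lorenz toy data, reservoir size, spectral radius, and subspace dimension k are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)

# --- Toy multivariate chaotic series: Lorenz system, Euler-integrated ---
def lorenz(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    xyz = np.empty((n, 3))
    x, y, z = 1.0, 1.0, 1.0
    for t in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[t] = (x, y, z)
    return xyz

data = lorenz(3000)
data = (data - data.mean(0)) / data.std(0)        # normalise each variable
U_in, Y_target = data[:-1], data[1:]              # one-step-ahead prediction

# --- Standard ESN reservoir (sizes and scalings are assumptions) ---
n_res, n_in = 400, 3
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
W = rng.uniform(-1.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

def collect_states(inputs):
    x = np.zeros(n_res)
    states = np.empty((n_res, len(inputs)))
    for t, u in enumerate(inputs):
        x = np.tanh(W @ x + W_in @ u)
        states[:, t] = x
    return states

washout = 100
X = collect_states(U_in)[:, washout:]             # reservoir state matrix
Y = Y_target[washout:].T                          # teacher outputs

# --- Subspace readout: truncated SVD of X via a Lanczos-type iteration ---
# SciPy's svds stands in for the paper's fast subspace decomposition; keeping
# only the k dominant directions discards near-collinear state components
# instead of inverting an ill-conditioned X X^T.
k = 50                                            # assumed subspace dimension
Uk, sk, Vkt = svds(X, k=k)
W_out = (Y @ Vkt.T) @ np.diag(1.0 / sk) @ Uk.T    # W_out ≈ Y X^+ restricted to the subspace

print("training RMSE:", np.sqrt(np.mean((W_out @ X - Y) ** 2)))
```

The readout formula follows from the truncated SVD X ≈ Uk Sk Vk^T: the pseudo-inverse restricted to the retained subspace is Vk Sk^{-1} Uk^T, so small singular directions, which cause the ill-posedness of the plain pseudo-inverse solution, never enter the output weights.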