A Correntropy-Based Echo State Network with Application to Time Series Prediction

Times Cited: 1
Authors
Chen, Xiufang [1 ]
Su, Zhenming [1 ]
Jin, Long [1 ]
Li, Shuai [2 ]
Affiliations
[1] Lanzhou Univ, Sch Informat Sci & Engn, Lanzhou 730000, Peoples R China
[2] Univ Chinese Acad Sci, Hangzhou Inst Adv Study, Hangzhou 310024, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Incremental learning; Time series analysis; Noise; Echo state networks; Benchmark testing; Prediction algorithms; Robustness; Noise measurement; Optimization; Correntropy; echo state network (ESN); noise; time series prediction; NEURAL-NETWORKS; STABILITY; MACHINE;
DOI
10.1109/JAS.2024.124932
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
As a class of recurrent neural networks, echo state networks (ESNs) have been studied in depth and applied successfully across a wide range of fields. Nevertheless, the traditional ESN and most of its variants are built on second-order statistics of the data (e.g., variance and covariance), so additional information in the data is neglected. In information-theoretic learning, correntropy is able to capture such additional information. Guided by the maximum correntropy criterion, this paper therefore proposes a correntropy-based echo state network (CESN) that exploits first-order and higher-order information of the data, improving robustness to noise. Furthermore, an incremental learning algorithm for the CESN is presented, which updates the CESN when new data arrive without retraining the network from scratch. Finally, experiments on benchmark problems and comparisons with existing works verify the effectiveness and superiority of the proposed CESN.
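For a concrete illustration of the idea summarized above, the following is a minimal sketch, not the authors' implementation: a standard leaky-integrator ESN reservoir whose readout weights are trained under a Gaussian-kernel maximum correntropy criterion, V(e) = E[exp(-e^2 / (2*sigma^2))], solved by a fixed-point (half-quadratic) reweighting iteration. Function names such as run_reservoir and train_readout_mcc, and all hyperparameter values, are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: ESN reservoir + maximum-correntropy-criterion readout.
# Assumptions: leaky-integrator reservoir, Gaussian correntropy kernel,
# fixed-point (half-quadratic) reweighting; not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(u, n_res=200, rho=0.9, leak=0.3, in_scale=0.5):
    """Collect leaky-integrator reservoir states for a 1-D input sequence u."""
    W_in = in_scale * rng.uniform(-1, 1, size=(n_res, 1))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    x = np.zeros(n_res)
    states = np.zeros((len(u), n_res))
    for t, ut in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.array([ut]) + W @ x)
        states[t] = x
    return states

def train_readout_mcc(X, y, sigma=0.5, lam=1e-4, n_iter=20):
    """Readout weights under a Gaussian-kernel MCC objective: each sample is
    reweighted by exp(-r^2 / (2 sigma^2)) of its current residual r, then a
    weighted ridge regression is solved; iterate to a fixed point."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)  # LS init
    for _ in range(n_iter):
        r = y - X @ w
        g = np.exp(-r**2 / (2 * sigma**2))             # correntropy weights
        Xg = X * g[:, None]
        w = np.linalg.solve(Xg.T @ X + lam * np.eye(X.shape[1]), Xg.T @ y)
    return w

# Toy usage: one-step-ahead prediction of a sine wave corrupted by
# Gaussian noise plus a few large outliers (impulsive noise).
T = 1000
u = np.sin(0.1 * np.arange(T))
y = np.roll(u, -1) + 0.05 * rng.standard_normal(T)
y[rng.integers(0, T, 20)] += 5.0
X = run_reservoir(u)
w = train_readout_mcc(X[:800], y[:800])
print("test RMSE:", np.sqrt(np.mean((X[800:] @ w - y[800:])**2)))
```

On such toy data with impulsive noise, the correntropy-weighted readout typically degrades less than an ordinary least-squares readout, which is the intuition behind the robustness claim in the abstract.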
Pages: 425-435
Number of Pages: 11