Greedy deep stochastic configuration networks ensemble with boosting negative correlation learning

Cited: 6
Authors
Zhang, Chenglong [1 ,2 ,4 ]
Wang, Yang [3 ]
Zhang, David [1 ,3 ,5 ]
Affiliations
[1] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Guangdong, Peoples R China
[2] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Anhui, Peoples R China
[3] Guizhou Univ, State Key Lab Publ Big Data, Guiyang 550025, Peoples R China
[4] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen Key Lab Pattern Anal & Perceptual Comp, Shenzhen 518172, Guangdong, Peoples R China
[5] Chinese Univ Hong Kong, Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518172, Guangdong, Peoples R China
Keywords
Deep stochastic configuration networks; Ensemble learning; Greedy optimization; Negative correlation learning; Boosting; Prediction intervals;
DOI
10.1016/j.ins.2024.121140
CLC classification
TP [Automation technology, computer technology];
Discipline code
0812 ;
Abstract
Deep stochastic configuration networks (DSCNs) employ a data-dependent supervision mechanism to randomly assign node parameters and incrementally construct a deep neural network structure, thereby ensuring the model's universal approximation property. To build a randomized neural network ensemble with better generalization performance, we propose a novel greedy deep stochastic configuration networks ensemble model based on boosting negative correlation learning, termed GDSCNE. First, a greedy optimization strategy based on inequality constraints is used to generate the random parameters of multi-layer base components, which accelerates the decline of the network residual when a new node is configured. In addition, a boosting negative correlation learning framework is presented for combining the base components: a least-squares approach with a negative correlation learning penalty term updates the ensemble output weights of each base component, and a boosting method then constructs a stronger ensemble by adaptively weighting the base components' outputs. Finally, we evaluate GDSCNE on popular regression benchmark datasets from the KEEL repository; the experimental results demonstrate that GDSCNE outperforms state-of-the-art randomized learning algorithms in terms of regression accuracy and generalization performance across regression datasets of varying sizes.
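The inequality-constrained node configuration that the abstract refers to can be illustrated with a minimal single-hidden-layer stochastic configuration network sketch. This is an assumption-based simplification, not the authors' exact method: the paper's base learners are deep and use a greedy multi-constraint variant, whereas the constraint below drops the tolerance sequence and keeps only the basic supervisory inequality that an admissible random node must satisfy before its output weights are refit by least squares.

```python
import numpy as np

def build_scn(X, y, max_nodes=25, candidates=50, r=0.99, rng=None):
    """Minimal single-layer SCN sketch (hypothetical simplification):
    each step samples random candidate nodes, keeps the one that best
    satisfies the supervisory inequality, and refits output weights."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    H = np.empty((n, 0))            # hidden-layer output matrix
    beta = np.zeros(0)              # output weights
    e = y.copy()                    # current residual
    for _ in range(max_nodes):
        best, best_xi = None, 0.0
        for _ in range(candidates):
            w = rng.uniform(-1, 1, d)
            b = rng.uniform(-1, 1)
            g = np.tanh(X @ w + b)  # candidate node output
            # supervisory inequality: xi >= 0 guarantees the new node
            # reduces the residual fast enough (tolerance term omitted)
            xi = (e @ g) ** 2 / (g @ g) - (1 - r) * (e @ e)
            if xi > best_xi:
                best, best_xi = g, xi
        if best is None:            # no admissible candidate: stop early
            break
        H = np.column_stack([H, best])
        # refit all output weights by least squares, then update residual
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        e = y - H @ beta
    return H, beta
```

In the full GDSCNE pipeline, several such base learners would then be combined with negative-correlation-penalized least squares and boosting weights; the sketch above covers only the constructive base-learner stage.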
Pages: 15
References
40 in total
[1]   Fast decorrelated neural network ensembles with random weights [J].
Alhamdoosh, Monther ;
Wang, Dianhui .
INFORMATION SCIENCES, 2014, 264 :104-117
[2]  
Cecotti H, 2016, IEEE IJCNN, P3628, DOI 10.1109/IJCNN.2016.7727666
[3]   Stochastic configuration network based on improved whale optimization algorithm for nonstationary time series prediction [J].
Chen, Zi-yu ;
Xiao, Fei ;
Wang, Xiao-kang ;
Deng, Min-hui ;
Wang, Jian-qiang ;
Li, Jun-Bo .
JOURNAL OF FORECASTING, 2022, 41 (07) :1458-1482
[4]   Cloud ensemble learning for fault diagnosis of rolling bearings with stochastic configuration networks [J].
Dai, Wei ;
Liu, Jiang ;
Wang, Lanhao .
INFORMATION SCIENCES, 2024, 658
[5]   Hybrid Parallel Stochastic Configuration Networks for Industrial Data Analytics [J].
Dai, Wei ;
Zhou, Xinyu ;
Li, Depeng ;
Zhu, Song ;
Wang, Xuesong .
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (04) :2331-2341
[6]
Ding Shifei, 2023, Chinese Journal of Computers, V46, P2476
[7]   Deep stochastic configuration networks with different random sampling strategies [J].
Felicetti, Matthew J. ;
Wang, Dianhui .
INFORMATION SCIENCES, 2022, 607 :819-830
[8]   Deep stochastic configuration networks with optimised model and hyper-parameters [J].
Felicetti, Matthew J. ;
Wang, Dianhui .
INFORMATION SCIENCES, 2022, 600 :431-441
[9]   AdaBoost Regression Algorithm Based on Classification-Type Loss [J].
Gao, Lin ;
Kou, Peng ;
Gao, Feng ;
Guan, Xiaohong .
2010 8TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2010, :682-687
[10]   Approximation with random bases: Pro et Contra [J].
Gorban, Alexander N. ;
Tyukin, Ivan Yu. ;
Prokhorov, Danil V. ;
Sofeikov, Konstantin I. .
INFORMATION SCIENCES, 2016, 364 :129-145