An Improved Double Hidden-Layer Variable Length Incremental Extreme Learning Machine Based on Particle Swarm Optimization

Cited by: 3
Authors
Li, Qiuwei [1 ]
Han, Fei [1 ]
Ling, Qinghua [2 ]
Affiliations
[1] Jiangsu Univ, Sch Comp Sci & Commun Engn, Zhenjiang, Jiangsu, Peoples R China
[2] Jiangsu Univ Sci & Technol, Sch Comp Sci & Engn, Zhenjiang 212003, Jiangsu, Peoples R China
Source
INTELLIGENT COMPUTING THEORIES AND APPLICATION, PT II | 2018 / Vol. 10955
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Extreme Learning Machine; Particle Swarm Optimization; Feature extraction; Auto-encoder;
DOI
10.1007/978-3-319-95933-7_5
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Extreme learning machine (ELM) has been widely used in diverse domains. With the development of deep learning, integrating ELM with deep learning methods has become a promising approach to feature extraction and classification. However, ELM may require a large number of hidden nodes, and the random generation of hidden-node parameters can cause ill-conditioning. In this paper, an effective hybrid approach combining Variable-length Incremental ELM with the Particle Swarm Optimization (PSO) algorithm, named PSO-VIELM, is proposed to regulate weights and extract features. The new method builds two hidden layers to obtain a compact structure with better generalization performance. The first hidden layer, the extraction layer, performs feature learning on the raw data, dynamically adds hidden nodes, and uses the fitting error as the fitness function when updating the weights of those nodes with PSO. The second hidden layer, the classification layer, classifies the features produced by the extraction layer and uses cross-entropy as the fitness function for weight updates. To determine an appropriate number of hidden nodes, node growth stops once the fitness function on the validation set rebounds. Results on several datasets show that PSO-VIELM achieves better generalization performance than other constructive ELMs.
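The core idea in the abstract, tuning the randomly generated ELM hidden-layer parameters with PSO using the fitting error as the fitness function, can be sketched as follows. This is a minimal single-hidden-layer illustration, not the authors' two-layer PSO-VIELM implementation; all function names and hyperparameters are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, W, b):
    # Hidden output H = sigmoid(X W + b); output weights by pseudo-inverse.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.linalg.pinv(H) @ T

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def fitness(params, X, T, n_in, n_hid):
    # Decode a particle into input weights W and biases b, then use the
    # fitting error (RMSE) as the fitness, as the paper does for its
    # extraction layer.
    W = params[:n_in * n_hid].reshape(n_in, n_hid)
    b = params[n_in * n_hid:].reshape(1, n_hid)
    beta = elm_fit(X, T, W, b)
    err = elm_predict(X, W, b, beta) - T
    return np.sqrt(np.mean(err ** 2))

def pso_elm(X, T, n_hid=10, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    # Standard PSO over the flattened (W, b) parameters of the hidden layer.
    n_in = X.shape[1]
    dim = n_in * n_hid + n_hid
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, T, n_in, n_hid) for p in pos])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p, X, T, n_in, n_hid) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest_f.argmin()
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    W = gbest[:n_in * n_hid].reshape(n_in, n_hid)
    b = gbest[n_in * n_hid:].reshape(1, n_hid)
    return W, b, elm_fit(X, T, W, b), gbest_f
```

The paper additionally grows hidden nodes incrementally, stacks a second PSO-tuned layer with a cross-entropy fitness for classification, and stops adding nodes when validation fitness rebounds; those steps are omitted here for brevity.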
Pages: 34-43 (10 pages)