Two-hidden-layer extreme learning machine for regression and classification

Cited by: 90
Authors
Qu, B. Y. [1 ,2 ]
Lang, B. F. [1 ]
Liang, J. J. [1 ]
Qin, A. K. [3 ]
Crisalle, O. D. [1 ]
Affiliations
[1] Zhengzhou Univ, Sch Elect Engn, Zhengzhou 450001, Peoples R China
[2] Zhongyuan Univ Technol, Sch Elect & Informat Engn, Zhengzhou 450007, Peoples R China
[3] RMIT Univ, Sch Comp Sci & Informat Technol, Melbourne, Vic 3001, Australia
Funding
National Natural Science Foundation of China;
Keywords
Extreme learning machine; Two-hidden-layer; Regression; Classification; Neural network; FEEDFORWARD NEURAL-NETWORK; LANDMARK RECOGNITION; CAPABILITIES; ALGORITHM;
DOI
10.1016/j.neucom.2015.11.009
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
As a single-hidden-layer feedforward neural network, an extreme learning machine (ELM) randomizes the weights between the input layer and the hidden layer, as well as the biases of the hidden neurons, and analytically determines the weights between the hidden layer and the output layer using the least-squares method. This paper proposes a two-hidden-layer ELM (denoted TELM) by introducing a novel method for obtaining the parameters of the second hidden layer (the connection weights between the first and second hidden layers and the biases of the second hidden layer), thereby bringing the actual hidden-layer output closer to the expected hidden-layer output in the two-hidden-layer feedforward network. At the same time, the TELM method inherits the randomness of the ELM technique for the first hidden layer (the connection weights between the input layer and the first hidden layer and the biases of the first hidden layer). Experiments on several regression problems and some popular classification datasets demonstrate that the proposed TELM consistently outperforms the original ELM, as well as some existing multilayer ELM variants, achieving higher average accuracy with fewer hidden neurons. (C) 2015 Elsevier B.V. All rights reserved.
Pages: 826-834
Number of pages: 9
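
The two-step procedure the abstract outlines can be made concrete with a short sketch. The NumPy code below is a minimal illustration, assuming a sigmoid activation, pseudoinverse (least-squares) solutions throughout, and the bias folded into an augmented weight matrix; the function names `telm_fit` and `telm_predict` and this particular formulation of the second-layer solve are illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inverse_sigmoid(h, eps=1e-6):
    # Logit (inverse sigmoid), clipped so the log stays well defined
    # even when the expected hidden output falls outside (0, 1).
    h = np.clip(h, eps, 1.0 - eps)
    return np.log(h / (1.0 - h))

def telm_fit(X, T, n_hidden=100, rng=None):
    """Sketch of a two-hidden-layer ELM fit (illustrative, not the paper's code).

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape

    # Step 1: standard ELM -- random first-layer weights and biases,
    # output weights solved by least squares via the pseudoinverse.
    W1 = rng.uniform(-1.0, 1.0, size=(d, n_hidden))
    b1 = rng.uniform(-1.0, 1.0, size=n_hidden)
    H1 = sigmoid(X @ W1 + b1)
    beta = np.linalg.pinv(H1) @ T

    # Step 2: the expected second-hidden-layer output is the one that
    # would reproduce the targets exactly through the current beta.
    H2_expected = T @ np.linalg.pinv(beta)

    # Step 3: solve for the second-layer weights and bias (stacked into
    # one augmented matrix acting on [1, H1]) so that the activated
    # output approaches H2_expected.
    H1_aug = np.hstack([np.ones((n, 1)), H1])
    W2_aug = np.linalg.pinv(H1_aug) @ inverse_sigmoid(H2_expected)

    # Step 4: recompute the actual second-layer output and refit beta.
    H2 = sigmoid(H1_aug @ W2_aug)
    beta = np.linalg.pinv(H2) @ T
    return W1, b1, W2_aug, beta

def telm_predict(X, params):
    W1, b1, W2_aug, beta = params
    H1 = sigmoid(X @ W1 + b1)
    H1_aug = np.hstack([np.ones((X.shape[0], 1)), H1])
    return sigmoid(H1_aug @ W2_aug) @ beta
```

In this sketch, `H2_expected = T @ pinv(beta)` is the hidden output that would map exactly onto the targets through the current output weights; solving for `W2_aug` against the inverse activation of that target is one natural way to pull the actual hidden-layer output toward the expected one, in the sense the abstract describes.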