Enhanced random search based incremental extreme learning machine

Cited: 707
Authors
Huang, Guang-Bin [1 ]
Chen, Lei [1 ,2 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
[2] Natl Univ Singapore, Sch Comp, Singapore 117543, Singapore
Keywords
Incremental extreme learning machine; Convergence rate; Random hidden nodes; Random search;
DOI
10.1016/j.neucom.2007.10.008
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Recently, an incremental algorithm referred to as the incremental extreme learning machine (I-ELM) was proposed by Huang et al. [G.-B. Huang, L. Chen, C.-K. Siew, Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Trans. Neural Networks 17 (4) (2006) 879-892], which randomly generates hidden nodes and then analytically determines the output weights. Huang et al. proved in theory that, although additive or RBF hidden nodes are generated randomly, the network constructed by I-ELM can work as a universal approximator. During our recent study, we found that some of the hidden nodes in such networks may play a very minor role in the network output and thus may unnecessarily increase the network complexity. To avoid this issue and to obtain a more compact network architecture, this paper proposes an enhanced method for I-ELM (referred to as EI-ELM). At each learning step, several hidden nodes are randomly generated, and among them the hidden node leading to the largest decrease in residual error is added to the existing network; the output weight of the network is calculated in the same simple way as in the original I-ELM. Generally speaking, the proposed enhanced I-ELM works for a wide class of piecewise continuous hidden nodes. (C) 2007 Elsevier B.V. All rights reserved.
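The selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes single-output regression with sigmoid additive hidden nodes, and the function names `ei_elm`/`predict` and the candidate count `k` are choices made here for clarity. The output weight of each candidate node is the analytic least-squares fit of the current residual, as in the original I-ELM.

```python
import numpy as np

def ei_elm(X, y, max_nodes=50, k=10, seed=None):
    """Sketch of EI-ELM: at each step, try k random hidden nodes and
    keep only the one that most reduces the residual error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    e = y.astype(float).copy()           # residual error, initialized to the target
    nodes = []                           # accepted nodes as (a, b, beta) tuples
    for _ in range(max_nodes):
        best = None
        for _ in range(k):               # k candidate random hidden nodes per step
            a = rng.uniform(-1, 1, d)    # random input weights
            b = rng.uniform(-1, 1)       # random bias
            h = 1.0 / (1.0 + np.exp(-(X @ a + b)))  # sigmoid node output
            beta = (e @ h) / (h @ h)     # analytic output weight for the residual
            new_e = e - beta * h
            err = new_e @ new_e
            if best is None or err < best[0]:
                best = (err, a, b, beta, new_e)
        _, a, b, beta, e = best          # add the node with the smallest residual
        nodes.append((a, b, beta))
    return nodes

def predict(nodes, X):
    out = np.zeros(X.shape[0])
    for a, b, beta in nodes:
        out += beta / (1.0 + np.exp(-(X @ a + b)))
    return out
```

Because each accepted output weight is the least-squares projection of the current residual onto the candidate node's output, the residual norm is non-increasing at every step, and picking the best of k candidates (rather than the first, as in plain I-ELM) tends to yield a more compact network for the same error.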
Pages: 3460-3468
Page count: 9