What are Extreme Learning Machines? Filling the Gap Between Frank Rosenblatt's Dream and John von Neumann's Puzzle

Cited: 377
Authors
Huang, Guang-Bin [1 ]
Affiliation
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Keywords
Extreme learning machine; Random vector functional link; QuickNet; Radial basis function network; Feedforward neural network; Randomness; UNIVERSAL APPROXIMATION; FEEDFORWARD NETWORKS; MIXED SELECTIVITY; NEURAL-NETWORKS; ALGORITHM; INFORMATION; NEURONS; MODEL
DOI
10.1007/s12559-015-9333-0
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The emerging machine learning technique of extreme learning machines (ELM) has become a hot area of research in recent years, owing to the growing research activity and significant contributions of numerous researchers around the world. Recently, it has come to our attention that a number of misplaced notions and misunderstandings are being disseminated about the relationship between ELM and some earlier works. This paper wishes to clarify that: (1) ELM theories address the open problem that has puzzled the neural networks, machine learning and neuroscience communities for 60 years, namely whether hidden nodes/neurons need to be tuned during learning, and prove that, contrary to common knowledge and conventional neural network learning tenets, hidden nodes/neurons need not be iteratively tuned in a wide range of neural networks and learning models (Fourier series, biological learning, etc.). Unlike ELM theories, none of those earlier works provides theoretical foundations for feedforward neural networks with random hidden nodes. (2) ELM is proposed for both generalized single-hidden-layer feedforward networks and multi-hidden-layer feedforward networks (including biological neural networks). (3) Homogeneous-architecture-based ELM is proposed for feature learning, clustering, regression and (binary/multi-class) classification. (4) Compared with ELM, SVM and LS-SVM tend to provide suboptimal solutions, and neither considers feature representations in the hidden layers of multi-hidden-layer feedforward networks.
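The abstract's central claim — that hidden nodes can be assigned randomly and left untuned, with only the output weights solved in closed form — can be illustrated with a minimal single-hidden-layer sketch. This is not the paper's implementation, just a common textbook-style rendering of the ELM idea; the hidden-layer size, tanh activation, and toy regression target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=64):
    # Hidden-layer weights and biases are random and never tuned
    # (the core ELM claim the abstract describes).
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y    # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: fit y = sin(x) on [0, pi] with no iterative training.
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

The only "learning" step is the single pseudoinverse solve for `beta`; the hidden layer is fixed at initialization, which is what distinguishes this family of models from backpropagation-trained networks.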
Pages: 263-278
Page count: 16
Cited References
64 items in total (entries [41]-[50] shown)
[41] McDonnell MD. IEEE IJCNN, 2015.
[42] Miche Y, Sorjamaa A, Bas P, Simula O, Jutten C, Lendasse A. OP-ELM: Optimally Pruned Extreme Learning Machine. IEEE Transactions on Neural Networks, 2010, 21(1): 158-162.
[43] Minsky M, Papert S. Perceptrons: An Introduction to Computational Geometry. 1969.
[44] Pao YH, Park GH, Sobajic DJ. Learning and Generalization Characteristics of the Random Vector Functional-Link Net. Neurocomputing, 1994, 6(2): 163-180.
[45] Park J, Sandberg IW. Universal Approximation Using Radial-Basis-Function Networks. Neural Computation, 1991, 3(2): 246-257.
[46] Poggio T. AI Memo 2001-011, MIT, 2001.
[47] Principe JC, Chen B. Universal Approximation with Convex Optimization: Gimmick or Reality? IEEE Computational Intelligence Magazine, 2015, 10(2): 68-77.
[48] Rahimi A, Recht B. Random Features for Large-Scale Kernel Machines. Proceedings of the 20th International Conference on Neural Information Processing Systems, 2007, 20: 1177-1184. DOI: 10.5555/2981562.2981710.
[49] Rahimi A, Recht B. Uniform Approximation of Functions with Random Bases. 46th Annual Allerton Conference on Communication, Control, and Computing, 2008: 555+.
[50] Rigotti M, Barak O, Warden MR, Wang XJ, Daw ND, Miller EK, Fusi S. The Importance of Mixed Selectivity in Complex Cognitive Tasks. Nature, 2013, 497(7451): 585-590.