What are Extreme Learning Machines? Filling the Gap Between Frank Rosenblatt's Dream and John von Neumann's Puzzle

Cited by: 363
Author
Huang, Guang-Bin [1 ]
Affiliation
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Keywords
Extreme learning machine; Random vector functional link; QuickNet; Radial basis function network; Feedforward neural network; Randomness; UNIVERSAL APPROXIMATION; FEEDFORWARD NETWORKS; MIXED SELECTIVITY; NEURAL-NETWORKS; ALGORITHM; INFORMATION; NEURONS; MODEL;
DOI
10.1007/s12559-015-9333-0
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The emergent machine learning technique of extreme learning machines (ELMs) has become a hot area of research over the past years, thanks to growing research activity and significant contributions by numerous researchers around the world. Recently, it has come to our attention that a number of misplaced notions and misunderstandings are being disseminated about the relationship between ELM and some earlier works. This paper wishes to clarify that: (1) ELM theories address the open problem that has puzzled the neural networks, machine learning and neuroscience communities for 60 years: whether hidden nodes/neurons need to be tuned during learning. They prove that, in contrast to common knowledge and conventional neural network learning tenets, hidden nodes/neurons need not be iteratively tuned in a wide range of neural networks and learning models (Fourier series, biological learning, etc.). Unlike ELM theories, none of those earlier works provides theoretical foundations for feedforward neural networks with random hidden nodes; (2) ELM is proposed for both generalized single-hidden-layer feedforward networks and multi-hidden-layer feedforward networks (including biological neural networks); (3) homogeneous-architecture-based ELM is proposed for feature learning, clustering, regression and (binary/multi-class) classification; (4) compared to ELM, SVM and LS-SVM tend to provide suboptimal solutions, and neither considers feature representations in the hidden layers of multi-hidden-layer feedforward networks.
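The core ELM recipe the abstract alludes to, random hidden nodes that are never iteratively tuned plus an output layer solved analytically by least squares, can be sketched as follows. This is an illustrative sketch only (names such as `elm_train` and the choice of a tanh activation are assumptions, not the paper's reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

def elm_train(X, T, n_hidden=30):
    """Single-hidden-layer ELM: random hidden weights, analytic output weights."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights, never tuned
    b = rng.standard_normal(n_hidden)                # random hidden biases, never tuned
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: fit y = sin(x) on [-3, 3] without any gradient-based tuning.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=30)
mse = float(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

Note that the only learned parameters are the output weights `beta`; the hidden layer stays exactly as randomly initialized, which is the point of contrast with iteratively tuned feedforward networks.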
Pages: 263-278
Page count: 16