GA-SELM: Greedy algorithms for sparse extreme learning machine

Cited by: 34
Authors
Alcin, Omer F. [1 ]
Sengur, Abdulkadir [2 ]
Ghofrani, Sedigheh [3 ]
Ince, Melih C. [2 ]
Affiliations
[1] Firat Univ, Dept Elect & Comp Sci, Tech Educ Fac, TR-23169 Elazig, Turkey
[2] Firat Univ, Fac Technol, Dept Elect & Elect Engn, TR-23169 Elazig, Turkey
[3] Islamic Azad Univ, Elect & Elect Engn Dept, Tehran South Branch, Tehran, Iran
Keywords
ELM; Regularized ELM; Sparsity; Greedy algorithms; CoSaMP; IHT; OMP; StOMP; SLFNs; Regularization; Regression
DOI
10.1016/j.measurement.2014.04.012
Chinese Library Classification
T [Industrial Technology]
Discipline Code
08
Abstract
In the last decade, the extreme learning machine (ELM), a new learning algorithm for single-hidden-layer feedforward networks (SLFNs), has gained much attention in the machine intelligence and pattern recognition communities, with numerous successful real-world applications. The ELM structure has several advantages, such as good generalization performance with an extremely fast learning speed and low computational cost, especially when dealing with many patterns defined in a high-dimensional space. However, three major problems commonly arise when using the ELM structure: (i) the dataset may contain irrelevant variables, (ii) choosing the number of hidden-layer neurons is difficult, and (iii) the singularity problem may be encountered. Several methods in the regularization framework have been proposed to overcome these limitations. In this paper, we propose several sparse ELM schemes in which various greedy algorithms are used for sparse approximation of the output weight vector of the ELM network; we refer to these new schemes collectively as GA-SELM. We investigate several greedy algorithms, namely Compressive Sampling Matching Pursuit (CoSaMP), Iterative Hard Thresholding (IHT), Orthogonal Matching Pursuit (OMP), and Stagewise Orthogonal Matching Pursuit (StOMP), to obtain a regularized ELM scheme. Compared with traditional ELM schemes, the new schemes offer several benefits: low computational complexity, freedom from parameter adjustment, and avoidance of the singularity problem. Empirical studies on nine commonly used regression benchmarks demonstrate the significant advantages of the proposed approach. Moreover, a comparison with the original ELM and regularized ELM schemes is performed. (C) 2014 Elsevier Ltd. All rights reserved.
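The core idea is easy to sketch: a standard ELM draws random input weights and biases, computes the hidden-layer output matrix H, and solves for the output weights via the Moore-Penrose pseudoinverse; GA-SELM instead recovers a sparse output weight vector with a greedy pursuit. Below is a minimal Python sketch of the OMP variant, using scikit-learn's OrthogonalMatchingPursuit; the hidden-layer size, sparsity level, activation function, and helper names are illustrative assumptions, not values or code from the paper.

# Minimal sketch of an OMP-based sparse ELM for regression (one of the
# four greedy schemes studied in the paper). Hyperparameters n_hidden
# and n_nonzero are illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def train_sparse_elm(X, y, n_hidden=100, n_nonzero=20, seed=0):
    rng = np.random.default_rng(seed)
    # Standard ELM step: random input weights and biases, fixed once drawn.
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix (N x n_hidden)
    # GA-SELM step: instead of beta = pinv(H) @ y, greedily solve
    #   min ||y - H @ beta||_2  subject to  ||beta||_0 <= n_nonzero.
    # CoSaMP, IHT, or StOMP could be dropped in here in place of OMP.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(H, y)
    beta, bias = omp.coef_, omp.intercept_
    # Return a predictor that reuses the same random hidden layer.
    return lambda X_new: np.tanh(X_new @ W + b) @ beta + bias

# Toy usage: fit a noisy 1-D regression problem.
X = np.linspace(-3, 3, 300).reshape(-1, 1)
y = np.sinc(X).ravel() + 0.05 * np.random.default_rng(1).normal(size=300)
predict = train_sparse_elm(X, y)
print(predict(X[:5]))

Because only n_nonzero entries of beta end up nonzero, the greedy step effectively prunes hidden neurons, which speaks to problems (ii) and (iii) above: the sparsity level stands in for an exact neuron count, and no matrix inversion of a possibly singular system is required.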
Pages: 126-132
Page count: 7
References (26 in total)
[1] Anonymous. International Conference on Acoustics, Speech and Signal Processing.
[2] Anonymous. MATLAB, version 7.10.0.
[3] Anonymous. NIPS.
[4] Anonymous. UCI Machine Learning Repository, 2010.
[5] Blumensath, T.; Davies, M.E. Iterative Thresholding for Sparse Approximations. Journal of Fourier Analysis and Applications, 2008, 14(5-6): 629-654.
[6] Deng, W.; Zheng, Q.; Chen, L. Regularized Extreme Learning Machine. 2009 IEEE Symposium on Computational Intelligence and Data Mining, 2009: 389-395.
[7] Donoho, D.L.; Tsaig, Y.; Drori, I.; Starck, J.-L. Sparse Solution of Underdetermined Systems of Linear Equations by Stagewise Orthogonal Matching Pursuit. IEEE Transactions on Information Theory, 2012, 58(2): 1094-1121.
[8] Friedman, J.; Hastie, T.; Tibshirani, R. Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software, 2010, 33(1): 1-22.
[9] Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme Learning Machine: Theory and Applications. Neurocomputing, 2006, 70(1-3): 489-501.
[10] Huang, G.-B.; Zhou, H.; Ding, X.; Zhang, R. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2012, 42(2): 513-529.