Multiple kernel extreme learning machine

Cited by: 163
Authors
Liu, Xinwang [1 ]
Wang, Lei [2 ]
Huang, Guang-Bin [3 ]
Zhang, Jian [4 ]
Yin, Jianping [1 ]
Affiliations
[1] Natl Univ Def Technol, Sch Comp Sci, Changsha 410073, Hunan, Peoples R China
[2] Univ Wollongong, Sch Comp Sci & Software Engn, Wollongong, NSW 2522, Australia
[3] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
[4] Univ Technol Sydney, Fac Engn & Informat Technol, Sydney, NSW 2007, Australia
Funding
National Natural Science Foundation of China;
Keywords
Extreme learning machine; Multiple kernel learning; Support vector machines; RECOGNITION; REGRESSION;
DOI
10.1016/j.neucom.2013.09.072
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Extreme learning machine (ELM) has been an important research topic over the last decade due to its high efficiency, easy implementation, unification of classification and regression, and unification of binary and multi-class learning tasks. Despite these advantages, existing ELM algorithms pay little attention to optimizing the choice of kernels, which is crucial to the performance of ELM in applications. More importantly, there is a lack of a general framework for ELM to integrate multiple heterogeneous data sources for classification. In this paper, we propose a general learning framework, termed multiple kernel extreme learning machine (MK-ELM), to address the above two issues. In the proposed MK-ELM, the optimal kernel combination weights and the structural parameters of ELM are jointly optimized. Following recent research on support vector machine (SVM) based multiple kernel learning (MKL) algorithms, we first design a sparse MK-ELM algorithm by imposing an l1-norm constraint on the kernel combination weights, and then extend it to a non-sparse scenario by substituting the l1-norm constraint with an lp-norm (p > 1) constraint. After that, a radius-incorporated MK-ELM algorithm, which incorporates the radius of the minimum enclosing ball (MEB), is introduced. Three efficient optimization algorithms are proposed to solve the corresponding kernel learning problems. Comprehensive experiments have been conducted on the Protein, Oxford Flower17, Caltech101 and Alzheimer's disease data sets to evaluate the proposed algorithms in terms of classification accuracy and computational efficiency. As the experimental results indicate, our proposed algorithms can achieve comparable or even better classification performance than state-of-the-art MKL algorithms, while incurring much less computational cost. (C) 2014 Elsevier B.V. All rights reserved.
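The core construction the abstract describes can be sketched in a few lines: a combined kernel matrix is built as a weighted sum of base kernels, and the kernel-form ELM solution is obtained from a single regularized linear system. The sketch below is a minimal, hypothetical illustration, not the paper's implementation: the kernel weights `mu` are fixed (uniform on the l1 simplex), whereas MK-ELM jointly optimizes them with the ELM parameters; the toy data, kernel choices, and regularization value `C` are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (hypothetical): two well-separated Gaussian blobs
X = np.vstack([rng.normal(-2.0, 1.0, (20, 2)), rng.normal(2.0, 1.0, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])

def rbf_kernel(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def linear_kernel(A, B):
    return A @ B.T

# Base kernel matrices on the training set
kernels = [rbf_kernel(X, X), linear_kernel(X, X)]

# Kernel combination weights on the l1 simplex (fixed here;
# MK-ELM would optimize mu under an l1- or lp-norm constraint)
mu = np.array([0.5, 0.5])
Omega = sum(m * K for m, K in zip(mu, kernels))

# Kernel-form ELM solution: alpha = (I/C + Omega)^{-1} y
C = 10.0
alpha = np.linalg.solve(np.eye(len(y)) / C + Omega, y)

# Decision values and training accuracy
f = Omega @ alpha
acc = float(np.mean(np.sign(f) == y))
```

Because the combined kernel is a convex combination of positive semi-definite matrices, `Omega` stays positive semi-definite and the regularized system is always solvable; the alternating scheme suggested by the abstract would re-solve for `alpha` after each update of `mu`.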
Pages: 253-264
Page count: 12
References
40 items in total
[1] [Anonymous], 2006, 2006 IEEE COMPUTER S.
[2] Boyd S., 2004, CONVEX OPTIMIZATION.
[3] Chen Yiqiang; Zhao Zhongtang; Wang Shuangquan; Chen Zhenyu. Extreme learning machine-based device displacement free activity recognition model [J]. SOFT COMPUTING, 2012, 16(09): 1617-1625.
[4] Cortes C, 1995, MACH LEARN, V20, P273, DOI 10.1023/A:1022627411411.
[5] Damoulas Theodoros; Girolami Mark A. Probabilistic multi-class multi-kernel learning: on protein fold recognition and remote homology detection [J]. BIOINFORMATICS, 2008, 24(10): 1264-1270.
[6] Davatzikos Christos; Bhatt Priyanka; Shaw Leslie M.; Batmanghelich Kayhan N.; Trojanowski John Q. Prediction of MCI to AD conversion, via MRI, CSF biomarkers, and pattern classification [J]. NEUROBIOLOGY OF AGING, 2011, 32(12): 2322.e19-2322.e27.
[7] Do H, 2009, LECT NOTES ARTIF INT, V5781, P330.
[8] Feng Guorui; Huang Guang-Bin; Lin Qingping; Gay Robert. Error Minimized Extreme Learning Machine With Growth of Hidden Nodes and Incremental Learning [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 2009, 20(08): 1352-1357.
[9] Frenay B., 2010, ESANN.
[10] Frenay Benoit; Verleysen Michel. Parameter-insensitive kernel in extreme learning for non-linear support vector regression [J]. NEUROCOMPUTING, 2011, 74(16): 2526-2531.