Semi-Supervised and Unsupervised Extreme Learning Machines

Cited by: 653
Authors
Huang, Gao [1 ]
Song, Shiji [1 ]
Gupta, Jatinder N. D. [2 ]
Wu, Cheng [1 ]
Affiliations
[1] Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
[2] Univ Alabama, Coll Business Adm, Huntsville, AL 35899 USA
Funding
National Natural Science Foundation of China;
Keywords
Clustering; embedding; extreme learning machine (ELM); manifold regularization; semi-supervised learning; unsupervised learning; LEAST-SQUARES ALGORITHM; SUPPORT VECTOR MACHINES; FEEDFORWARD NETWORKS; REGRESSION; APPROXIMATION;
DOI
10.1109/TCYB.2014.2307349
Chinese Library Classification (CLC)
TP [Automation & Computer Technology];
Discipline code
0812;
Abstract
Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multi-cluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
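The abstract's key concept, random feature mapping, is what makes ELM training fast: the hidden-layer weights are drawn at random and fixed, so only the output weights are learned, by a regularized least-squares solve. The sketch below illustrates that mechanism for a plain supervised ELM on a toy regression task; it is not the paper's SS-ELM or US-ELM (which add a manifold-regularization term), and all variable names and hyperparameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# 1) Random feature mapping: hidden weights and biases are drawn once
#    at random and never trained (the defining trait of ELM)
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # hidden-layer output matrix

# 2) Output weights via ridge-regularized least squares -- the only
#    trained parameters, obtained in closed form
lam = 1e-3  # illustrative regularization strength
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

y_hat = H @ beta
mse = np.mean((y - y_hat) ** 2)
```

Because training reduces to one linear solve, the method is inductive: an unseen test point is handled by pushing it through the same fixed random mapping and multiplying by `beta`.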
Pages: 2405-2417
Page count: 13