Generalized extreme learning machine autoencoder and a new deep neural network

Cited by: 118
Authors
Sun, Kai [1 ]
Zhang, Jiangshe [1 ]
Zhang, Chunxia [1 ]
Hu, Junying [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Math & Stat, Xian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Extreme learning machine; Generalized extreme learning machine autoencoder; Manifold regularization; Deep neural network; Multilayer generalized extreme learning machine autoencoder; FACE RECOGNITION; DIMENSIONALITY;
DOI
10.1016/j.neucom.2016.12.027
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Extreme learning machine (ELM) is an efficient algorithm for training single-hidden-layer feedforward neural networks (SLFNs). With the development of unsupervised learning in recent years, integrating ELM with an autoencoder has become a new approach for extracting features from unlabeled data. In this paper, we propose a new variant of the extreme learning machine autoencoder (ELM-AE), called the generalized extreme learning machine autoencoder (GELM-AE), which adds a manifold regularization term to the objective of ELM-AE. Experiments on real-world data sets show that GELM-AE outperforms several state-of-the-art unsupervised learning algorithms, including k-means, Laplacian embedding (LE), spectral clustering (SC), and ELM-AE. Furthermore, we propose a new deep neural network, the multilayer generalized extreme learning machine autoencoder (ML-GELM), built by stacking several GELM-AEs to learn more abstract representations. The experimental results show that ML-GELM outperforms ELM and many other deep models, such as the multilayer ELM autoencoder (ML-ELM), the deep belief network (DBN), and the stacked autoencoder (SAE). Owing to its use of ELM, ML-GELM is also faster to train than DBN and SAE.
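The idea summarized in the abstract — an ELM autoencoder whose ridge objective gains a graph-Laplacian (manifold) penalty, with layers stacked to form a deep encoder — can be sketched as below. This is a minimal NumPy illustration under stated assumptions: the function names, the k-nearest-neighbor graph construction, the tanh activation, and all hyperparameter values are the sketch's own choices, not the paper's implementation.

```python
import numpy as np

def gelm_ae(X, n_hidden, C=1.0, lam=0.1, k=5, rng=None):
    """Sketch of one GELM-AE layer: an ELM autoencoder whose output weights
    minimize ||H B - X||^2 + ||B||^2 / C + lam * tr(B' H' L H B),
    where L is a graph Laplacian over the inputs (manifold regularization)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # ELM-style random, fixed hidden-layer parameters
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations, shape (n, n_hidden)

    # Unnormalized Laplacian L = D - S of a symmetrized k-NN graph (an
    # illustrative choice; other affinity graphs would also fit the objective)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]  # skip index 0 (the point itself)
        S[i, nbrs] = 1.0
    S = np.maximum(S, S.T)                 # symmetrize the adjacency
    L = np.diag(S.sum(1)) - S

    # Closed-form ridge solution with the added manifold term
    A = H.T @ H + np.eye(n_hidden) / C + lam * H.T @ L @ H
    beta = np.linalg.solve(A, H.T @ X)     # output weights, (n_hidden, d)
    return beta

def gelm_embed(X, beta):
    # ELM-AE convention: the learned representation is g(X beta^T)
    return np.tanh(X @ beta.T)

# Stacking two GELM-AE layers gives an ML-GELM-style encoder
X = np.random.default_rng(0).standard_normal((50, 8))
H1 = gelm_embed(X, gelm_ae(X, 16, rng=0))    # first-layer features, (50, 16)
H2 = gelm_embed(H1, gelm_ae(H1, 4, rng=1))   # deeper features, (50, 4)
```

Each layer is solved in closed form (one linear system), which is why stacking GELM-AEs stays much cheaper than the iterative pretraining used by DBN or SAE.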
Pages: 374-381 (8 pages)