A semi-supervised auto-encoder using label and sparse regularizations for classification

Cited by: 30
Authors
Chai, Zhilei [1 ,2 ]
Song, Wei [1 ,3 ]
Wang, Huiling [1 ,4 ]
Liu, Fei [1 ]
Affiliations
[1] Jiangnan Univ, Sch IoT Engn, Wuxi, Peoples R China
[2] Minist Educ, Engn Res Ctr Internet Things Appl Technol, Beijing, Peoples R China
[3] Jiangnan Univ, Jiangsu Prov Engn Lab Pattern Recognit & Computat, Wuxi, Peoples R China
[4] Wuxi Taihu Univ, Sch IoT Engn, Wuxi, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Auto-encoder; Semi-supervised learning; Classification; ELM; DBN; NETWORK;
DOI
10.1016/j.asoc.2019.01.021
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The semi-supervised auto-encoder (SSAE) is a promising deep-learning method that integrates the advantages of unsupervised and supervised learning. The unsupervised process extracts the underlying concepts of the data as intrinsic information, enhancing the model's generalization ability in representing the data, while the supervised process describes the rules of categorization with labels, further improving classification accuracy. In this paper, we propose a novel semi-supervised learning method, namely, the label and sparse regularization auto-encoder (LSRAE), which integrates label and sparse constraints to update the structure of the AE. The sparse regularization activates a minority of important neurons while inhibiting most of the others, ensuring that LSRAE yields a more local and informative representation of the data. Moreover, through the label constraint, the supervised learning process extracts features regulated by category rules and further enhances classifier performance. To extensively test the performance of LSRAE, we conduct experiments on the benchmark datasets USPS, ISOLET and MNIST. The experimental results demonstrate the superiority of LSRAE in comparison with state-of-the-art feature extraction methods including AE, LSAE, SAE, ELM, DBN and adaptive DBN. (C) 2019 Elsevier B.V. All rights reserved.
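The abstract describes a training objective that combines an unsupervised reconstruction term, a sparsity penalty on hidden activations, and a supervised label term. The sketch below illustrates one plausible form of such a composite objective, assuming a KL-divergence sparsity penalty and a cross-entropy label term; the weighting parameters `beta` and `lam` and the exact formulation are illustrative assumptions, not the paper's published equations.

```python
import numpy as np

def kl_sparsity(rho, rho_hat):
    # KL divergence between target sparsity rho and mean activation rho_hat,
    # summed over hidden units (a common sparse-AE penalty; assumed here)
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def lsrae_loss(x, x_rec, h, y_true, y_pred, rho=0.05, beta=1.0, lam=1.0):
    # reconstruction term (unsupervised part of the objective)
    rec = 0.5 * np.mean(np.sum((x - x_rec) ** 2, axis=1))
    # sparse regularization on the mean hidden activation per unit
    sparse = kl_sparsity(rho, h.mean(axis=0))
    # label constraint: cross-entropy on labelled samples (supervised part)
    y_pred = np.clip(y_pred, 1e-8, 1.0)
    label = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
    return rec + beta * sparse + lam * label
```

With perfect reconstruction, mean activations equal to the sparsity target, and correct label predictions, all three terms vanish; degrading any of them raises the loss, which is the behaviour the combined objective is meant to enforce.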
Pages: 205-217 (13 pages)