A deep contractive autoencoder for solving multiclass classification problems

Cited: 15
Authors
Aamir, Muhammad [1 ]
Mohd Nawi, Nazri [2 ]
Wahid, Fazli [3 ]
Mahdin, Hairulnizam [1 ]
Affiliations
[1] Univ Tun Hussein Onn, Fac Comp Sci & Informat Technol, Batu Pahat, Malaysia
[2] Univ Tun Hussein Onn, Soft Comp & Data Min Ctr, Batu Pahat, Malaysia
[3] Univ Lahore, Fac Comp Sci & Informat Technol, Gujrat Campus, Gujrat, Pakistan
Keywords
Deep auto encoder; Contractive auto encoder; Feature reduction; Classification; MNIST variants; ALGORITHM; NETWORK;
DOI
10.1007/s12065-020-00424-6
Chinese Library Classification (CLC) number
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The contractive autoencoder (CAE) is one of the most robust variants of the standard autoencoder (AE). The major drawback of the conventional CAE is its high reconstruction error during the encoding and decoding of the input features. Because of this drawback, the CAE cannot capture the finer details present in the input features and misses information worth considering. As a result, the features extracted by the CAE do not truly represent the input, and the classifier fails to solve classification problems efficiently. In this work, an improved variant of the CAE, named the deep CAE, is proposed; it is based on a layered architecture with a feed-forward mechanism. In the proposed architecture, standard CAEs are arranged in layers, and encoding and decoding take place inside each layer. The features obtained from one CAE are given as inputs to the next. Each CAE in every layer is responsible for reducing the reconstruction error, thereby yielding more informative features. The feature set obtained from the last CAE is given as input to a softmax classifier for classification. The performance and efficiency of the proposed model were tested on five MNIST-variant datasets, and the results were compared with those of the standard SAE, DAE, RBM, SCAE, ScatNet and PCANet in terms of training error, testing error and execution time. The results reveal that the proposed model outperforms the aforementioned models.
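The layered scheme described in the abstract can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the sigmoid/tied-weight design, the layer sizes, the learning rate, and the finite-difference trainer are all assumptions made for brevity. Each CAE minimizes reconstruction error plus a contractive penalty (the squared Frobenius norm of the encoder's Jacobian), and the codes of one trained CAE become the inputs of the next, as in the greedy layer-wise stacking the paper describes.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class ContractiveAE:
    """A single contractive autoencoder with tied weights: its loss is the
    squared reconstruction error plus lam * squared Frobenius norm of the
    encoder's Jacobian with respect to the input."""

    def __init__(self, n_in, n_hid, lam=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_in, n_hid))
        self.b = np.zeros(n_hid)   # encoder bias
        self.c = np.zeros(n_in)    # decoder bias (decoder weights tied: W.T)
        self.lam = lam

    def encode(self, X):
        return sigmoid(X @ self.W + self.b)

    def loss(self, X):
        H = self.encode(X)
        R = sigmoid(H @ self.W.T + self.c)             # reconstruction
        recon = np.mean(np.sum((X - R) ** 2, axis=1))
        # For a sigmoid encoder, dh_j/dx_i = h_j(1 - h_j) * W_ij, so the
        # per-sample squared Frobenius norm of the Jacobian is
        # sum_j (h_j(1 - h_j))^2 * sum_i W_ij^2.
        contract = np.mean((H * (1.0 - H)) ** 2 @ np.sum(self.W ** 2, axis=0))
        return recon + self.lam * contract

    def train(self, X, lr=0.3, epochs=200, eps=1e-5):
        """Central finite-difference gradient descent: slow, but it keeps the
        sketch short and avoids hand-derived gradients."""
        for _ in range(epochs):
            grads = []
            for P in (self.W, self.b, self.c):
                G = np.zeros_like(P)
                for idx in np.ndindex(P.shape):
                    old = P[idx]
                    P[idx] = old + eps
                    lp = self.loss(X)
                    P[idx] = old - eps
                    lm = self.loss(X)
                    P[idx] = old
                    G[idx] = (lp - lm) / (2.0 * eps)
                grads.append(G)
            for P, G in zip((self.W, self.b, self.c), grads):
                P -= lr * G

# Greedy layer-wise stack: each CAE is trained on the codes produced by the
# previous one; the final codes would feed a softmax classifier.
rng = np.random.default_rng(1)
X = rng.random((20, 6))                 # toy data standing in for MNIST
codes = X
for n_in, n_hid in ((6, 4), (4, 3)):    # illustrative layer sizes
    cae = ContractiveAE(n_in, n_hid)
    before = cae.loss(codes)
    cae.train(codes)
    after = cae.loss(codes)
    codes = cae.encode(codes)           # forward the codes to the next layer
```

In a full pipeline the contractive weight `lam`, layer widths, and optimizer would be tuned per dataset; an analytic gradient (as in Rifai et al., 2011) replaces the finite-difference loop for anything beyond toy sizes.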
Pages: 1619-1633
Number of pages: 15