A deep contractive autoencoder for solving multiclass classification problems

Cited by: 15
Authors
Aamir, Muhammad [1 ]
Mohd Nawi, Nazri [2 ]
Wahid, Fazli [3 ]
Mahdin, Hairulnizam [1 ]
Affiliations
[1] Univ Tun Hussein Onn, Fac Comp Sci & Informat Technol, Batu Pahat, Malaysia
[2] Univ Tun Hussein Onn, Soft Comp & Data Min Ctr, Batu Pahat, Malaysia
[3] Univ Lahore, Fac Comp Sci & Informat Technol, Gujrat Campus, Gujrat, Pakistan
Keywords
Deep auto encoder; Contractive auto encoder; Feature reduction; Classification; MNIST variants; ALGORITHM; NETWORK;
DOI
10.1007/s12065-020-00424-6
Chinese Library Classification (CLC) number
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The contractive autoencoder (CAE) is one of the most robust variants of the standard autoencoder (AE). The major drawback of the conventional CAE is its high reconstruction error during the encoding and decoding of the input features, which prevents it from capturing the finer details present in the input and causes it to miss information worth considering. As a result, the features extracted by a CAE do not truly represent all of the input features, and the downstream classifier fails to solve classification problems efficiently. In this work, an improved variant of the CAE, named the deep CAE, is proposed; it is based on a layered architecture with a feed-forward mechanism. In the proposed architecture, standard CAEs are arranged in layers, and encoding and decoding take place inside each layer. The features obtained from the previous CAE are given as input to the next CAE. Each CAE in every layer is responsible for reducing the reconstruction error, thereby yielding more informative features. The feature set obtained from the last CAE is fed to a softmax classifier for classification. The performance and efficiency of the proposed model were evaluated on five MNIST variant datasets. The results were compared with standard SAE, DAE, RBM, SCAE, ScatNet and PCANet in terms of training error, testing error and execution time, and show that the proposed model outperforms these models.
Pages: 1619-1633
Number of pages: 15
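The abstract describes the architecture only at a high level and gives no implementation details. The following is a minimal Python/TensorFlow sketch of the general idea: contractive autoencoders trained greedily layer by layer, with each layer's code passed to the next and the final code fed to a softmax classifier. The sigmoid units, tied decoder weights, layer sizes, penalty weight `lam`, optimiser settings, and placeholder data are illustrative assumptions, not the authors' configuration.

```python
# Sketch only: greedy layer-wise training of stacked contractive autoencoders
# followed by a softmax classifier. All hyperparameters are assumptions.
import numpy as np
import tensorflow as tf


def train_cae_layer(x, hidden_dim, lam=1e-4, epochs=5, batch_size=128):
    """Train one contractive autoencoder on x; return encoder params and codes."""
    input_dim = x.shape[1]
    W = tf.Variable(tf.random.normal([input_dim, hidden_dim], stddev=0.05))
    b = tf.Variable(tf.zeros([hidden_dim]))
    b_rec = tf.Variable(tf.zeros([input_dim]))
    opt = tf.keras.optimizers.Adam(1e-3)
    ds = tf.data.Dataset.from_tensor_slices(x).shuffle(10_000).batch(batch_size)

    for _ in range(epochs):
        for xb in ds:
            with tf.GradientTape() as tape:
                h = tf.sigmoid(tf.matmul(xb, W) + b)                          # encoder
                x_hat = tf.sigmoid(tf.matmul(h, tf.transpose(W)) + b_rec)     # tied-weight decoder
                recon = tf.reduce_mean(tf.reduce_sum(tf.square(xb - x_hat), axis=1))
                # Contractive penalty: Frobenius norm of the encoder Jacobian.
                # For sigmoid units, ||J||_F^2 = sum_j (h_j(1-h_j))^2 * sum_i W_ij^2.
                contractive = tf.reduce_mean(
                    tf.reduce_sum(tf.square(h * (1.0 - h)) *
                                  tf.reduce_sum(tf.square(W), axis=0), axis=1))
                loss = recon + lam * contractive
            grads = tape.gradient(loss, [W, b, b_rec])
            opt.apply_gradients(zip(grads, [W, b, b_rec]))

    encoded = tf.sigmoid(tf.matmul(x, W) + b).numpy()
    return (W, b), encoded


# Greedy stacking: the features learned by one CAE become the input of the next,
# and the code from the last CAE is fed to a softmax classifier.
x = np.random.rand(1000, 784).astype("float32")   # placeholder for MNIST-like inputs
y = np.random.randint(0, 10, size=1000)           # placeholder labels
features = x
for dim in (256, 128, 64):                        # illustrative layer sizes
    _, features = train_cae_layer(features, dim)

clf = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
clf.fit(features, y, epochs=5, batch_size=128, verbose=0)
```

Each layer is trained in isolation here, which is one common reading of "each CAE in every layer is responsible for reducing the reconstruction error"; a joint fine-tuning pass over the whole stack would be a natural extension but is not shown.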
Related papers (44 in total)
• [1] Aamir, Muhammad; Nawi, Nazri Mohd; Bin Mahdin, Hairulnizam; Naseem, Rashid; Zulqarnain, Muhammad. Auto-Encoder Variants for Solving Handwritten Digits Classification Problem. INTERNATIONAL JOURNAL OF FUZZY LOGIC AND INTELLIGENT SYSTEMS, 2020, 20(01): 8-16.
• [2] Aamir M, 2019, INT J ADV COMPUT SC, V10, P416.
• [3] Abadi M., 2016, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. DOI: 10.48550/ARXIV.1605.08695; 10.5555/3026877.3026899.
• [4] Bengio Y., 2013, P 26 INT C NEUR INF, V1, P899.
• [5] Biglari, Fahimeh; Ebadian, Ali. Limited memory BFGS method based on a high-order tensor model. COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2015, 60(02): 413-422.
• [6] Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi. PCANet: A Simple Deep Learning Baseline for Image Classification? IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24(12): 5017-5032.
• [7] Chollet F., 2018, ASTROPHYSICS SOURCE.
• [8] Chorowski, Jan; Zurada, Jacek M. Learning Understandable Neural Networks With Nonnegative Weight Constraints. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2015, 26(01): 62-69.
• [9] Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng. Multimodal Deep Autoencoder for Human Pose Recovery. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24(12): 5659-5670.
• [10] Hosseini-Asl, Ehsan; Zurada, Jacek M.; Nasraoui, Olfa. Deep Learning of Part-Based Representation of Data Using Sparse Autoencoders With Nonnegativity Constraints. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2016, 27(12): 2486-2498.