Building feature space of extreme learning machine with sparse denoising stacked-autoencoder

Cited by: 48
Authors
Cao, Le-le [1 ]
Huang, Wen-bing [1 ]
Sun, Fu-chun [1 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Tsinghua Natl Lab Informat Sci & Technol TNList, Beijing 100084, Peoples R China
Keywords
Extreme learning machine (ELM); Ridge regression; Feature space; Stacked autoencoder (SAE); Classification; Regression; FACE RECOGNITION; BELIEF NETWORKS; DEEP; REGRESSION;
D O I
10.1016/j.neucom.2015.02.096
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The random-hidden-node extreme learning machine (ELM) is a generalized class of single-hidden-layer feed-forward neural networks (SLFNs) with three parts: random projection, nonlinear transformation, and a ridge regression (RR) model. Networks with deep architectures have demonstrated state-of-the-art performance in a variety of settings, especially in computer vision tasks. Deep learning algorithms such as the stacked autoencoder (SAE) and the deep belief network (DBN) are built on learning several levels of representation of the input. Beyond simply learning features by stacking autoencoders (AE), there is a need to increase their robustness to noise and to enforce sparsity of the weights, so that interesting and prominent features are easier to discover. The sparse AE and the denoising AE were developed for this purpose. This paper proposes an approach, SSDAE-RR (stacked sparse denoising autoencoder - ridge regression), that effectively integrates the advantages of the SAE, sparse AE, and denoising AE with the RR implementation of the ELM algorithm. We conducted an experimental study on real-world classification (binary and multiclass) and regression problems of different scales, comparing several relevant approaches: SSDAE-RR, ELM, DBN, neural network (NN), and SAE. The performance analysis shows that SSDAE-RR tends to achieve better generalization on relatively large datasets (large sample size and high dimension) that were not pre-processed for feature abstraction. On 16 of the 18 tested datasets, the performance of SSDAE-RR is more stable than that of the other tested approaches. We also note that the sparsity regularization and the denoising mechanism appear to be essential for constructing interpretable feature representations. The fact that SSDAE-RR often has a training time comparable to ELM makes it useful in some real applications. (C) 2015 Elsevier B.V. All rights reserved.
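The three ELM stages named in the abstract (random projection, nonlinear transformation, ridge regression) can be sketched as a minimal NumPy example. This is an illustrative sketch only: the function names (`elm_fit`, `elm_predict`), the sigmoid activation, and the hyperparameters are assumptions for demonstration, not the paper's exact configuration.

```python
import numpy as np

def elm_fit(X, T, n_hidden=64, ridge=1e-2, seed=0):
    """Train a basic random-hidden-node ELM (illustrative sketch).

    1. Random projection: hidden weights W and biases b are drawn at
       random and never updated.
    2. Nonlinear transformation: a sigmoid maps the projection to the
       hidden feature matrix H.
    3. Ridge regression: output weights beta solve the regularized
       least squares problem (H'H + ridge*I) beta = H'T in closed form.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer features
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression task: fit a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
T = np.sin(X) + 0.05 * rng.standard_normal(X.shape)
W, b, beta = elm_fit(X, T, n_hidden=64, ridge=1e-2, seed=0)
pred = elm_predict(X, W, b, beta)
mse = float(np.mean((pred - T) ** 2))
```

Only `beta` is learned; the hidden layer stays random, which is why ELM training reduces to a single linear solve. The paper's SSDAE-RR replaces the random hidden layer with features learned by a stacked sparse denoising autoencoder, then applies the same ridge-regression readout.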
Pages: 60 - 71
Page count: 12