Building feature space of extreme learning machine with sparse denoising stacked-autoencoder

Cited by: 48
Authors
Cao, Le-le [1 ]
Huang, Wen-bing [1 ]
Sun, Fu-chun [1 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Tsinghua Natl Lab Informat Sci & Technol TNList, Beijing 100084, Peoples R China
Keywords
Extreme learning machine (ELM); Ridge regression; Feature space; Stacked autoencoder (SAE); Classification; Regression; FACE RECOGNITION; BELIEF NETWORKS; DEEP; REGRESSION;
DOI
10.1016/j.neucom.2015.02.096
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The random-hidden-node extreme learning machine (ELM) is a broad generalization of single-hidden-layer feed-forward neural networks (SLFNs) and consists of three parts: a random projection, a nonlinear transformation, and a ridge regression (RR) model. Networks with deep architectures have demonstrated state-of-the-art performance in a variety of settings, especially on computer vision tasks. Deep learning algorithms such as the stacked autoencoder (SAE) and the deep belief network (DBN) are built on learning several levels of representation of the input. Beyond simply learning features by stacking autoencoders (AE), there is a need to increase robustness to noise and to enforce sparsity of the weights, so that interesting and prominent features are easier to discover; the sparse AE and the denoising AE were developed for this purpose. This paper proposes SSDAE-RR (stacked sparse denoising autoencoder - ridge regression), an approach that effectively integrates the advantages of the SAE, the sparse AE, the denoising AE, and the RR implementation used in the ELM algorithm. We conducted an experimental study on real-world classification (binary and multiclass) and regression problems of different scales, comparing several relevant approaches: SSDAE-RR, ELM, DBN, neural network (NN), and SAE. The performance analysis shows that SSDAE-RR tends to achieve better generalization on relatively large datasets (large sample size and high dimension) that were not pre-processed for feature abstraction. For 16 out of 18 tested datasets, the performance of SSDAE-RR is more stable than that of the other tested approaches. We also note that the sparsity regularization and the denoising mechanism appear to be indispensable for constructing interpretable feature representations. The fact that SSDAE-RR often has a training time comparable to ELM makes it useful in some real applications. (C) 2015 Elsevier B.V. All rights reserved.
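To make the pipeline described in the abstract concrete, the following Python (NumPy) sketch walks through the three ELM stages (random projection, nonlinear transformation, ridge-regression readout) and marks where SSDAE-RR would differ: instead of a random projection, the hidden-layer weights would come from encoder layers pre-trained as stacked sparse denoising autoencoders. This is an illustrative sketch only, not the authors' implementation; the toy data, the number of hidden units, and the regularization constant C are assumptions made here for demonstration.

```python
# Minimal sketch (not the paper's code) of the ELM pipeline named in the
# abstract: random projection -> nonlinear transformation -> ridge
# regression (RR) readout. In SSDAE-RR, the projection weights W and bias b
# would presumably be taken from stacked sparse denoising autoencoders
# trained beforehand; a random matrix stands in for them here.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_rr_readout(H, T, C=1.0):
    """Closed-form ridge-regression output weights:
    beta = (H^T H + I / C)^{-1} H^T T."""
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)

# Toy data: 200 samples, 20 features, 3 classes encoded one-hot
# (illustrative values, not from the paper).
X = rng.standard_normal((200, 20))
y = rng.integers(0, 3, size=200)
T = np.eye(3)[y]

# 1) Projection into the hidden feature space. An ELM draws W and b at
#    random; SSDAE-RR would supply them from the trained encoders.
n_hidden = 100
W = rng.standard_normal((20, n_hidden))
b = rng.standard_normal(n_hidden)

# 2) Nonlinear transformation of the projected inputs.
H = sigmoid(X @ W + b)

# 3) Ridge-regression readout, solved in closed form (no backpropagation).
beta = fit_rr_readout(H, T, C=10.0)

# Prediction reuses the same feature mapping, then takes the arg-max class.
pred = np.argmax(sigmoid(X @ W + b) @ beta, axis=1)
print("training accuracy:", np.mean(pred == y))
```

The readout stage is the same whether the feature space comes from a random projection or from pre-trained sparse denoising encoders: the output weights are obtained in closed form from a ridge-regression problem, which is consistent with the abstract's observation that SSDAE-RR often trains in a time comparable to ELM.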
Pages: 60-71
Number of pages: 12
Related Papers
50 records in total
  • [21] Radar HRRP Target Recognition Based on Stacked Autoencoder and Extreme Learning Machine
    Zhao, Feixiang
    Liu, Yongxiang
    Huo, Kai
    Zhang, Shuanghui
    Zhang, Zhongshuai
    SENSORS, 2018, 18 (01)
  • [22] LTE uplink interference analysis combined with denoising autoencoder and extreme learning machine
    Xu H.-K.
    Jiang T.-T.
    Li X.
    Jiang B.-X.
    Wang Y.-L.
    Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2022, 52 (01): 195 - 203
  • [23] Discriminative Feature Learning With Distance Constrained Stacked Sparse Autoencoder for Hyperspectral Target Detection
    Shi, Yanzi
    Lei, Jie
    Yin, Yaping
    Cao, Kailang
    Li, Yunsong
    Chang, Chein-I
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2019, 16 (09) : 1462 - 1466
  • [24] Network intrusion detection based on Contractive Sparse Stacked Denoising Autoencoder
    Lu, Jizhao
    Meng, Huiping
    Li, Wencui
    Liu, Yue
    Guo, Yihao
    Yang, Yang
    2021 IEEE INTERNATIONAL SYMPOSIUM ON BROADBAND MULTIMEDIA SYSTEMS AND BROADCASTING (BMSB), 2021,
  • [25] Enhanced Stacked Denoising Autoencoder-Based Feature Learning for Recognition of Wafer Map Defects
    Yu, Jianbo
    IEEE TRANSACTIONS ON SEMICONDUCTOR MANUFACTURING, 2019, 32 (04) : 613 - 624
  • [26] Manifold Regularized Stacked Autoencoder for Feature Learning
    Lu, Sicong
    Liu, Huaping
    Li, Chunwen
    2015 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC 2015): BIG DATA ANALYTICS FOR HUMAN-CENTRIC SYSTEMS, 2015, : 2950 - 2955
  • [27] Feature Selection with Optimal Stacked Sparse Autoencoder for Data Mining
    Hamza, Manar Ahmed
    Hassine, Siwar Ben Haj
    Abunadi, Ibrahim
    Al-Wesabi, Fahd N.
    Alsolai, Hadeel
    Hilal, Anwer Mustafa
    Yaseen, Ishfaq
    Motwakel, Abdelwahed
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 72 (02): 2581 - 2596
  • [28] AN ENHANCED HIERARCHICAL EXTREME LEARNING MACHINE WITH RANDOM SPARSE MATRIX BASED AUTOENCODER
    Wang, Tianlei
    Lai, Xiaoping
    Cao, Jiuwen
    Vong, Chi-Man
    Chen, Badong
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3817 - 3821
  • [29] Remote Sensing Image Classification Based on Ensemble Extreme Learning Machine With Stacked Autoencoder
    Lv, Fei
    Han, Min
    Qiu, Tie
    IEEE ACCESS, 2017, 5 : 9021 - 9031
  • [30] A Robust Acoustic Feature Extraction Approach Based On Stacked Denoising Autoencoder
    Liu, J. H.
    Zheng, W. Q.
    Zou, Y. X.
    2015 1ST IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA (BIGMM), 2015, : 124 - 127