Building feature space of extreme learning machine with sparse denoising stacked-autoencoder

Cited by: 48
Authors
Cao, Le-le [1 ]
Huang, Wen-bing [1 ]
Sun, Fu-chun [1 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Tsinghua Natl Lab Informat Sci & Technol TNList, Beijing 100084, Peoples R China
Keywords
Extreme learning machine (ELM); Ridge regression; Feature space; Stacked autoencoder (SAE); Classification; Regression; FACE RECOGNITION; BELIEF NETWORKS; DEEP; REGRESSION;
DOI
10.1016/j.neucom.2015.02.096
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The random-hidden-node extreme learning machine (ELM) is a generalized class of single-hidden-layer feed-forward neural networks (SLFNs) with three parts: a random projection, a nonlinear transformation, and a ridge regression (RR) model. Networks with deep architectures have demonstrated state-of-the-art performance in a variety of settings, especially on computer vision tasks. Deep learning algorithms such as the stacked autoencoder (SAE) and the deep belief network (DBN) are built on learning several levels of representation of the input. Beyond simply learning features by stacking autoencoders (AE), there is a need to increase robustness to noise and to reinforce the sparsity of weights, making it easier to discover interesting and prominent features. The sparse AE and the denoising AE were hence developed for this purpose. This paper proposes SSDAE-RR (stacked sparse denoising autoencoder - ridge regression), an approach that effectively integrates the advantages of the SAE, the sparse AE, the denoising AE, and the RR implementation in the ELM algorithm. We conducted an experimental study on real-world classification (binary and multiclass) and regression problems of different scales, comparing several relevant approaches: SSDAE-RR, ELM, DBN, neural network (NN), and SAE. The performance analysis shows that SSDAE-RR tends to achieve better generalization ability on relatively large datasets (large sample size and high dimension) that were not pre-processed for feature abstraction. For 16 out of 18 tested datasets, the performance of SSDAE-RR is more stable than that of the other tested approaches. We also note that the sparsity regularization and the denoising mechanism appear to be essential for constructing interpretable feature representations. Since an SSDAE-RR approach often has a training time comparable to ELM, it is useful in some real applications. (C) 2015 Elsevier B.V. All rights reserved.
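The three-part ELM pipeline described in the abstract (random projection, nonlinear transformation, ridge-regression readout) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the tanh activation, and the hyperparameters (`n_hidden`, `lam`) are assumptions for the sketch:

```python
import numpy as np

def elm_fit(X, T, n_hidden=64, lam=1e-3, seed=0):
    """Train a random-hidden-node ELM: fixed random projection,
    nonlinear transform, then a ridge-regression (RR) readout."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # nonlinear hidden-layer feature space
    # Ridge regression on the feature space: beta = (H'H + lam*I)^(-1) H'T
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage: regress a smooth 1-D target
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_fit(X, T)
pred = elm_predict(X, W, b, beta)
```

Only `beta` is learned, and in closed form; this is what makes ELM training fast. The SSDAE-RR approach replaces the single random projection with features from a stacked sparse denoising autoencoder, keeping the same RR readout.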
Pages: 60-71
Number of pages: 12
Related Papers
50 items total
  • [11] Denoising and feature extraction of weld seam profiles by stacked denoising autoencoder
    Li, Ran
    Gao, Hongming
    WELDING IN THE WORLD, 2021, 65 (09) : 1725 - 1733
  • [13] Intelligent analysis of tool wear state using stacked denoising autoencoder with online sequential-extreme learning machine
    Ou, Jiayu
    Li, Hongkun
    Huang, Gangjin
    Yang, Guowei
    MEASUREMENT, 2021, 167
  • [14] An Automatic Feature Learning and Fault Diagnosis Method Based on Stacked Sparse Autoencoder
    Qi, Yumei
    Shen, Changqing
    Liu, Jie
    Li, Xuwei
    Li, Dading
    Zhu, Zhongkui
    ADVANCED MANUFACTURING AND AUTOMATION VII, 2018, 451 : 367 - 375
  • [15] Electricity theft detection based on stacked sparse denoising autoencoder
    Huang, Yifan
    Xu, Qifeng
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2021, 125
  • [16] Stacked Denoising Autoencoder for Feature Representation Learning in Pose-Based Action Recognition
    Budiman, Arif
    Fanany, Mohamad Ivan
    Basaruddin, Chan
    2014 IEEE 3RD GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE), 2014, : 684 - 688
  • [17] Low-level structure feature extraction for image processing via stacked sparse denoising autoencoder
    Fan, Zunlin
    Bi, Duyan
    He, Linyuan
    Ma, Shiping
    Gao, Shan
    Li, Cheng
    NEUROCOMPUTING, 2017, 243 : 12 - 20
  • [18] A sparse stacked denoising autoencoder with optimized transfer learning applied to the fault diagnosis of rolling bearings
    Sun, Meidi
    Wang, Hui
    Liu, Ping
    Huang, Shoudao
    Fan, Peng
    MEASUREMENT, 2019, 146 : 305 - 314
  • [19] Temporal denoising and deep feature learning for enhanced defect detection in thermography using stacked denoising convolution autoencoder
    Yerneni, Naga Prasanthi
    Ghali, V. S.
    Vesala, G. T.
    Wang, Fei
    Mulaveesala, Ravibabu
    INFRARED PHYSICS & TECHNOLOGY, 2024, 143
  • [20] Clustering in extreme learning machine feature space
    He, Qing
    Jin, Xin
    Du, Changying
    Zhuang, Fuzhen
    Shi, Zhongzhi
    NEUROCOMPUTING, 2014, 128 : 88 - 95