Autoencoder With Invertible Functions for Dimension Reduction and Image Reconstruction

Cited by: 72
Authors
Yang, Yimin [1]
Wu, Q. M. Jonathan [1,2]
Wang, Yaonan [3]
Affiliations
[1] Univ Windsor, Dept Elect & Comp Engn, Windsor, ON N9B 3P4, Canada
[2] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
[3] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Hunan, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS | 2018, Vol. 48, No. 7
Funding
National Natural Science Foundation of China; Natural Sciences and Engineering Research Council of Canada;
Keywords
Autoencoder; deep learning (DL); dimension reduction; extreme learning machine (ELM); feature selection; generalization performance; EXTREME LEARNING-MACHINE; NEURAL-NETWORKS; K-SVD; FEATURES; RECOGNITION; ALGORITHM; FUSION; MODEL;
DOI
10.1109/TSMC.2016.2637279
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The extreme learning machine (ELM), originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for regression and classification applications. Although it offers promising performance and robustness and has been used in various applications, its single-layer architecture may lack effectiveness when applied to natural signals. To overcome this shortcoming, this work presents a new architecture based on a multilayer network framework. The significant contributions of this paper are as follows: 1) unlike existing multilayer ELMs, in which hidden nodes are obtained randomly, here all hidden layers with invertible functions are calculated by pulling the network output back and feeding it into the hidden layers, so feature learning is enriched by additional information, which results in better performance; 2) in contrast to existing multilayer network methods, which are usually effective only for classification applications, the proposed architecture is implemented for dimension reduction and image reconstruction; and 3) unlike other iterative learning-based deep networks (DL), the hidden layers of the proposed method are obtained in four steps, giving it much better learning efficiency than DL. Experimental results on 33 datasets indicate that, compared with other existing dimension reduction techniques, the proposed method performs competitively with fast training speeds.
Pages: 1065-1079
Page count: 15
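
The abstract's key mechanism is that hidden-layer parameters are not random but are recovered by pulling the network output back through an invertible activation function. Below is a minimal NumPy sketch of one plausible reading of that idea for a single ELM-autoencoder layer; the sigmoid/logit pair, the function names, and the exact ordering of the pull-back steps are illustrative assumptions, not the authors' published four-step procedure.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(h, eps=1e-6):
    # Inverse of the sigmoid; clipping keeps the inverse well defined.
    h = np.clip(h, eps, 1.0 - eps)
    return np.log(h / (1.0 - h))

def elm_ae_pullback(X, n_hidden, seed=0):
    """One ELM-autoencoder layer with pulled-back (non-random) weights.
    A sketch under stated assumptions, not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    # Step 1: random hidden mapping, as in a standard ELM autoencoder.
    W_rand = rng.standard_normal((X.shape[1], n_hidden))
    H = sigmoid(X @ W_rand)
    # Step 2: closed-form output weights that reconstruct the input.
    beta = np.linalg.pinv(H) @ X
    # Step 3: pull the target output (X itself) back through beta to get
    # the hidden activations the network "should" have produced.
    H_target = X @ np.linalg.pinv(beta)
    # Step 4: invert the activation and solve for the input weights by
    # least squares, replacing the random mapping.
    W = np.linalg.pinv(X) @ logit(H_target)
    return W, beta

# Usage: encode into n_hidden < n_features dimensions, then reconstruct.
X = np.random.default_rng(1).standard_normal((200, 30))
W, beta = elm_ae_pullback(X, n_hidden=10)
X_rec = sigmoid(X @ W) @ beta  # low-dimensional code -> reconstruction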