Autoencoder With Invertible Functions for Dimension Reduction and Image Reconstruction

Cited by: 72
Authors
Yang, Yimin [1 ]
Wu, Q. M. Jonathan [1 ,2 ]
Wang, Yaonan [3 ]
Affiliations
[1] Univ Windsor, Dept Elect & Comp Engn, Windsor, ON N9B 3P4, Canada
[2] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
[3] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Hunan, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS | 2018, Vol. 48, No. 7
Funding
National Natural Science Foundation of China; Natural Sciences and Engineering Research Council of Canada;
Keywords
Autoencoder; deep learning (DL); dimension reduction; extreme learning machine (ELM); feature selection; generalization performance; EXTREME LEARNING-MACHINE; NEURAL-NETWORKS; K-SVD; FEATURES; RECOGNITION; ALGORITHM; FUSION; MODEL;
DOI
10.1109/TSMC.2016.2637279
Chinese Library Classification
TP [automation technology, computer technology];
Discipline Code
0812;
Abstract
The extreme learning machine (ELM), originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for regression and classification applications. Although it offers promising performance and robustness and has been used in various applications, its single-layer architecture may lack effectiveness when applied to natural signals. To overcome this shortcoming, this work presents a new architecture based on a multilayer network framework. The significant contributions of this paper are as follows: 1) unlike existing multilayer ELMs, in which hidden nodes are obtained randomly, in this paper all hidden layers with invertible functions are calculated by pulling the network output back and putting it into the hidden layers; feature learning is thus enriched by additional information, which results in better performance; 2) in contrast to existing multilayer network methods, which are usually efficient for classification applications, the proposed architecture is implemented for dimension reduction and image reconstruction; and 3) unlike other iterative learning-based deep networks (DL), the hidden layers of the proposed method are obtained via four steps, so it has much better learning efficiency than DL. Experimental results on 33 datasets indicate that, in comparison with other existing dimension reduction techniques, the proposed method achieves competitive or better performance with fast training speeds.
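The core idea sketched in the abstract can be illustrated with a minimal ELM autoencoder in NumPy. This is a hypothetical simplification under stated assumptions, not the paper's exact four-step procedure: random input weights map data through an invertible activation (here `tanh`), output weights are obtained by regularized least squares, and a hidden-layer target is "pulled back" by inverting the activation, which is what an invertible function makes possible.

```python
import numpy as np

def elm_autoencoder(X, n_hidden, reg=1e-3, seed=0):
    """Sketch of a basic ELM autoencoder with an invertible activation.

    Assumptions (not from the paper): tanh activation, ridge-regularized
    least squares for the output weights, and a pseudoinverse-based
    pull-back of the target into the hidden layer.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Random input weights and biases, as in a standard ELM.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden activations
    # Output weights beta by regularized least squares: H @ beta ~= X.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    X_hat = H @ beta  # reconstruction of the input
    # "Pull back" the target through the invertible activation:
    # a least-squares hidden-layer target H* with H* @ beta ~= X,
    # then its pre-activation via arctanh (the inverse of tanh).
    H_target = np.clip(X @ np.linalg.pinv(beta), -0.999, 0.999)
    Z_target = np.arctanh(H_target)  # inverse activation
    return X_hat, beta, Z_target
```

Because the output weights are fitted by (regularized) least squares rather than gradient descent, training reduces to one linear solve per layer, which is where the claimed speed advantage over iteratively trained deep networks comes from.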
Pages: 1065-1079
Page count: 15
Related Papers
90 records in total
[41]   An Efficient Method for Traffic Sign Recognition Based on Extreme Learning Machine [J].
Huang, Zhiyong ;
Yu, Yuanlong ;
Gu, Jason ;
Liu, Huaping .
IEEE TRANSACTIONS ON CYBERNETICS, 2017, 47 (04) :920-933
[42]   STOCHASTIC CHOICE OF BASIS FUNCTIONS IN ADAPTIVE FUNCTION APPROXIMATION AND THE FUNCTIONAL-LINK NET [J].
IGELNIK, B ;
PAO, YH .
IEEE TRANSACTIONS ON NEURAL NETWORKS, 1995, 6 (06) :1320-1329
[43]   Jain P., 2008, PROC IEEE INT C COMP, P1
[44]   Label Consistent K-SVD: Learning a Discriminative Dictionary for Recognition [J].
Jiang, Zhuolin ;
Lin, Zhe ;
Davis, Larry S. .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2013, 35 (11) :2651-2664
[45]   Kasun LLC, 2013, IEEE INTELL SYST, V28, P31
[46]   Discriminative Color Descriptors [J].
Khan, Rahat ;
Van de Weijer, Joost ;
Khan, Fahad Shahbaz ;
Muselet, Damien ;
Ducottet, Christophe ;
Barat, Cecile .
2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013, :2866-2873
[47]   Krizhevsky A., 2017, COMMUN ACM, V60, P84, DOI 10.1145/3065386
[48]   Unsupervised Feature Learning Classification With Radial Basis Function Extreme Learning Machine Using Graphic Processors [J].
Lam, Dao ;
Wunsch, Donald .
IEEE TRANSACTIONS ON CYBERNETICS, 2017, 47 (01) :224-231
[49]   Lazebnik S., COMPUTER VISION PATT, V2, P2169
[50]   Backpropagation Applied to Handwritten Zip Code Recognition [J].
LeCun, Y. ;
Boser, B. ;
Denker, J. S. ;
Henderson, D. ;
Howard, R. E. ;
Hubbard, W. ;
Jackel, L. D. .
NEURAL COMPUTATION, 1989, 1 (04) :541-551