Non-iterative and Fast Deep Learning: Multilayer Extreme Learning Machines

Cited by: 153
Authors
Zhang, Jie [1 ]
Li, Yanjiao [2 ]
Xiao, Wendong [3 ]
Zhang, Zhiqiang [4 ]
Affiliations
[1] Peking Univ, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China
[2] Beijing Inst Technol, Sch Informat & Elect, Beijing 100081, Peoples R China
[3] Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing 100083, Peoples R China
[4] Univ Leeds, Sch Elect & Elect Engn, Leeds LS2 9JT, W Yorkshire, England
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
NEURAL-NETWORKS; MIXED SELECTIVITY; RECOGNITION; CLASSIFICATION; ALGORITHM; REPRESENTATIONS; APPROXIMATION; ARCHITECTURE; PREDICTION; FEATURES;
DOI
10.1016/j.jfranklin.2020.04.033
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
In the past decade, deep learning techniques have powered many aspects of our daily life and drawn ever-increasing research interest. However, conventional deep learning approaches, such as the deep belief network (DBN), the restricted Boltzmann machine (RBM), and the convolutional neural network (CNN), suffer from a time-consuming training process, owing to the fine-tuning of a large number of parameters and their complicated hierarchical structures. This complexity also makes it difficult to theoretically analyze these approaches and to prove their universal approximation capability. To tackle these issues, the multilayer extreme learning machine (ML-ELM) was proposed, which has accelerated the development of deep learning. Compared with conventional deep learning, ML-ELMs are non-iterative and fast, owing to their random feature mapping mechanism: hidden-layer weights are assigned randomly, and only the output weights are computed analytically. In this paper, we present a thorough review of the development of ML-ELMs, covering the stacked ELM autoencoder (ELM-AE), the residual ELM, and the local-receptive-field-based ELM (ELM-LRF), and we survey their applications. We also discuss the connection between random neural networks and conventional deep learning. (C) 2020 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
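To make the non-iterative claim concrete, the following is a minimal NumPy sketch of the idea, not the authors' implementation: the tanh activation, layer sizes, and ridge parameter reg are illustrative assumptions. Each ELM-AE layer draws random, fixed input weights (the random feature mapping); the only learned parameters come from one closed-form least-squares solve, so no back-propagation is involved.

import numpy as np

def elm_ae_layer(X, n_hidden, rng, reg=1e-3):
    """One ELM autoencoder (ELM-AE) layer: random feature mapping,
    then output weights solved in closed form (no iterative tuning)."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)                # random, never trained
    H = np.tanh(X @ W + b)                           # hidden activations
    # Ridge-regularized least squares: beta = (H'H + reg*I)^{-1} H'X,
    # so that H @ beta reconstructs the input X.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    # ML-ELM uses beta^T as the learned transform for this layer.
    return np.tanh(X @ beta.T)

def ml_elm_fit(X, T, hidden_sizes=(256, 128), reg=1e-3, seed=0):
    """Stack ELM-AE layers, then solve the final output weights analytically."""
    rng = np.random.default_rng(seed)
    H = X
    for n_hidden in hidden_sizes:
        H = elm_ae_layer(H, n_hidden, rng, reg)
    # Closed-form regression from the stacked features to the targets T.
    beta_out = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)
    return H @ beta_out

# Shapes-only demonstration on synthetic data.
X = np.random.default_rng(1).standard_normal((200, 32))
T = np.eye(3)[np.random.default_rng(2).integers(0, 3, size=200)]  # one-hot labels
print(ml_elm_fit(X, T).shape)  # (200, 3)

The entire "training" is two matrix solves per layer size, which is why ML-ELM variants train orders of magnitude faster than gradient-based deep networks of comparable depth.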
Pages: 8925-8955
Page count: 31