Big data regression with parallel enhanced and convex incremental extreme learning machines

Cited by: 3
Authors
Kokkinos, Yiannis [1]
Margaritis, Konstantinos G. [1]
Affiliation
[1] Univ Macedonia, Dept Appl Informat, Parallel & Distributed Proc Lab, 156 Egnatia Str, POB 1591, Thessaloniki 54006, Greece
Keywords
data parallelism; enhanced convex; extreme learning machine; incremental; regression; APPROXIMATION;
DOI
10.1111/coin.12136
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
This work considers scalable incremental extreme learning machine (I-ELM) algorithms suitable for big data regression. During I-ELM training, the hidden neurons are presented one by one, and the output weights are computed solely from simple direct summations, which map very efficiently onto parallel environments. The existing incremental versions of ELM are the original I-ELM, the enhanced incremental ELM (EI-ELM), and the convex incremental ELM (CI-ELM). We study the enhanced and convex incremental ELM (ECI-ELM) algorithm, which combines the latter two. The main findings are that ECI-ELM is fast, accurate, and fully scalable when it operates on a parallel system of distributed-memory workstations. Experimental simulations on several benchmark data sets demonstrate that ECI-ELM is the most accurate of the I-ELM, EI-ELM, and CI-ELM algorithms. We also analyze convergence as a function of the number of hidden neurons and show that ECI-ELM has the lowest error curve and converges much faster than the other algorithms on all data sets. The parallel simulations further reveal that data-parallel training of ECI-ELM guarantees simple, straightforward mappings and delivers speedups and scale-ups very close to linear.
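The incremental scheme summarized above can be sketched in a few lines. This is a minimal single-machine illustration, not the authors' code: at each step several random candidate neurons are tried (the "enhanced" part), and the residual is updated with the convex step size from the CI-ELM rule (the "convex" part). Function names, the tanh activation, and the candidate count are illustrative assumptions.

```python
import numpy as np

def eci_elm_train(X, y, n_hidden=50, n_candidates=10, rng=None):
    """Sketch of enhanced-convex incremental ELM regression (assumed form).

    Hidden neurons are added one at a time. Each step draws n_candidates
    random neurons, computes the optimal convex step size for each, and
    keeps the candidate that most reduces the residual norm.
    """
    rng = np.random.default_rng(rng)
    N, d = X.shape
    e = y.astype(float).copy()   # residual; the model starts at f_0 = 0
    W, b, beta = [], [], []      # input weights, biases, output weights
    for _ in range(n_hidden):
        best = None
        for _ in range(n_candidates):
            w_c = rng.uniform(-1.0, 1.0, d)
            b_c = rng.uniform(-1.0, 1.0)
            g = np.tanh(X @ w_c + b_c)   # candidate hidden-neuron output
            v = y - g                    # residual if this neuron alone were the model
            denom = np.dot(e - v, e - v)
            if denom == 0.0:
                continue
            # step size minimizing ||(1 - beta)*e + beta*v||^2 over beta
            bta = np.dot(e, e - v) / denom
            e_new = (1.0 - bta) * e + bta * v
            if best is None or np.dot(e_new, e_new) < np.dot(best[0], best[0]):
                best = (e_new, w_c, b_c, bta)
        if best is None:
            continue
        e, w_c, b_c, bta = best
        # the convex update f_n = (1-beta)*f_{n-1} + beta*g_n shrinks
        # every previous output weight by (1 - beta)
        beta = [bj * (1.0 - bta) for bj in beta]
        W.append(w_c); b.append(b_c); beta.append(bta)
    return np.array(W), np.array(b), np.array(beta)

def eci_elm_predict(X, W, b, beta):
    return np.tanh(X @ W.T + b) @ beta
```

Because the best candidate's residual norm never exceeds the previous one (beta = 0 recovers the old residual), the error curve is non-increasing in the number of hidden neurons, matching the convergence behavior the abstract reports. A data-parallel version would split the rows of X across workers and combine the per-worker dot products, which are the only cross-row operations here.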
Pages: 875-894 (20 pages)