Big data regression with parallel enhanced and convex incremental extreme learning machines

Cited by: 3
Authors
Kokkinos, Yiannis [1 ]
Margaritis, Konstantinos G. [1 ]
Affiliations
[1] Univ Macedonia, Dept Appl Informat, Parallel & Distributed Proc Lab, 156 Egnatia Str, POB 1591, Thessaloniki 54006, Greece
Keywords
data parallelism; enhanced convex; extreme learning machine; incremental; regression; APPROXIMATION;
DOI
10.1111/coin.12136
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This work considers scalable incremental extreme learning machine (I-ELM) algorithms, which could be suitable for big data regression. During the training of I-ELMs, the hidden neurons are presented one by one, and the weights are computed solely from simple direct summations, which can be mapped most efficiently onto parallel environments. The existing incremental versions of the ELM are the I-ELM, the enhanced incremental ELM (EI-ELM), and the convex incremental ELM (CI-ELM). We study the enhanced and convex incremental ELM (ECI-ELM) algorithm, which combines the latter two versions. The main findings are that the ECI-ELM is fast, accurate, and fully scalable when it operates on a parallel system of distributed-memory workstations. Experimental simulations on several benchmark data sets demonstrate that the ECI-ELM is the most accurate of the I-ELM, EI-ELM, and CI-ELM algorithms. We also analyze convergence as a function of the number of hidden neurons and show that the ECI-ELM has the lowest error curve and converges much faster than the other algorithms on all of the data sets. The parallel simulations also reveal that data-parallel training of the ECI-ELM guarantees simple, straightforward mappings and delivers speedups and scale-ups very close to linear.
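The training scheme described in the abstract reduces to a few vector operations per added neuron. The sketch below is one plausible serial reading of the ECI-ELM recurrence, not the authors' implementation: at each step, k candidate random sigmoid neurons are drawn (the "enhanced" part), the optimal convex mixing coefficient beta is computed for each so that the network output becomes f ← (1−β)f + βh (the "convex" part), and the candidate leaving the smallest residual is kept. All names and parameters (`eci_elm_fit`, `L`, `k`) are illustrative assumptions.

```python
import numpy as np

def eci_elm_fit(X, y, L=100, k=5, rng=None):
    """Sketch of ECI-ELM training: add L hidden neurons one by one,
    choosing the best of k random candidates per step (EI part) and
    mixing it into the network with an optimal convex weight (CI part)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    f = np.zeros(n)          # current network output on the training set
    e = y - f                # current residual
    W, b, beta = [], [], []  # input weights, biases, output weights
    for _ in range(L):
        best = None
        for _ in range(k):   # "enhanced": try k random candidate neurons
            w_c = rng.uniform(-1.0, 1.0, d)
            b_c = rng.uniform(-1.0, 1.0)
            h = 1.0 / (1.0 + np.exp(-(X @ w_c + b_c)))  # sigmoid activations
            u = h - f                                    # direction of the convex step
            denom = u @ u
            if denom < 1e-12:
                continue
            # "convex": beta minimizing ||e - beta * u||^2
            beta_c = (e @ u) / denom
            e_c = e - beta_c * u                         # residual after the update
            err = e_c @ e_c
            if best is None or err < best[0]:
                best = (err, w_c, b_c, beta_c, e_c)
        if best is None:
            break
        err, w_c, b_c, beta_c, e_c = best
        # f <- (1 - beta) f + beta h rescales every earlier output weight
        beta = [(1.0 - beta_c) * bt for bt in beta] + [beta_c]
        W.append(w_c)
        b.append(b_c)
        e = e_c
        f = y - e
    return np.array(W), np.array(b), np.array(beta)

def eci_elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta
```

Because each step only evaluates candidate neurons on the data and takes inner products with the residual, the per-neuron work is a set of direct summations over training samples, which is what makes the data-parallel mapping in the paper straightforward: each worker holds a data shard and contributes partial sums for `e @ u` and `u @ u`.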
Pages: 875-894
Page count: 20
Related papers
32 records in total
[1]   High-Performance Extreme Learning Machines: A Complete Toolbox for Big Data Applications [J].
Akusok, Anton ;
Bjork, Kaj-Mikael ;
Miche, Yoan ;
Lendasse, Amaury .
IEEE ACCESS, 2015, 3 :1011-1025
[2]  
[Anonymous], ESANN2010
[3]  
[Anonymous], 2003, Introduction to Parallel Computing
[4]  
[Anonymous], 1997, Parallel programming with MPI
[6]   Large-Scale Machine Learning with Stochastic Gradient Descent [J].
Bottou, Leon .
COMPSTAT'2010: 19TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL STATISTICS, 2010, :177-186
[7]   Extreme Learning Machines [J].
Cambria, Erik ;
Huang, Guang-Bin .
IEEE INTELLIGENT SYSTEMS, 2013, 28 (06) :30-31
[8]   Big Data: A Survey [J].
Chen, Min ;
Mao, Shiwen ;
Liu, Yunhao .
MOBILE NETWORKS & APPLICATIONS, 2014, 19 (02) :171-209
[9]  
Engelbrecht A.P., 2007, Computational Intelligence: An Introduction, 2nd ed.
[10]   Error Minimized Extreme Learning Machine With Growth of Hidden Nodes and Incremental Learning [J].
Feng, Guorui ;
Huang, Guang-Bin ;
Lin, Qingping ;
Gay, Robert .
IEEE TRANSACTIONS ON NEURAL NETWORKS, 2009, 20 (08) :1352-1357