Elastic extreme learning machine for big data classification

Cited by: 64
Authors
Xin, Junchang [1 ]
Wang, Zhiqiong [2 ]
Qu, Luxuan [2 ]
Wang, Guoren [1 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang, Peoples R China
[2] Northeastern Univ, Sino Dutch Biomed & Informat Engn Sch, Shenyang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Extreme learning machine; Big data classification; Incremental learning; Decremental learning; Correctional learning; REGRESSION; MAPREDUCE; FRAMEWORK; NETWORKS; ELM;
DOI
10.1016/j.neucom.2013.09.075
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Extreme Learning Machine (ELM) and its variants have been widely used in many applications due to their fast convergence and good generalization performance. Although the distributed ELM* based on the MapReduce framework can handle very large-scale training datasets in big data applications, coping with rapid updates to those datasets remains a challenging task. Therefore, in this paper, a novel Elastic Extreme Learning Machine based on the MapReduce framework, named Elastic ELM (E²LM), is proposed to overcome the weakness of ELM* in learning from rapidly updated large-scale training datasets. Firstly, an analysis of the properties of ELM* shows that its most computation-expensive part, matrix multiplication, can be calculated incrementally, decrementally, and correctionally. Next, the Elastic ELM based on the MapReduce framework is developed: it first computes the intermediate matrix multiplications over the updated subset of the training data, and then updates the matrix multiplications by modifying the old results with the intermediate ones. The corresponding new output weight vector is then obtained by centralized computing using the updated matrix multiplications. In this way, efficient learning over rapidly updated massive training datasets is realized. Finally, extensive experiments on synthetic data verify the effectiveness and efficiency of the proposed E²LM in learning from massive, rapidly updated training datasets under various experimental settings. (C) 2014 Elsevier B.V. All rights reserved.
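The core idea the abstract describes can be sketched in a few lines. In ELM the hidden-layer parameters are random and fixed, and training reduces to solving a least-squares problem whose expensive ingredients are the matrix products U = HᵀH and V = HᵀT (H is the hidden-layer output matrix, T the targets). Because these products are sums over training records, the contribution of added records can be added in and that of deleted records subtracted out, which is what makes incremental, decremental, and correctional learning possible without full retraining. The sketch below is a minimal single-machine illustration of that algebra, not the paper's MapReduce implementation; the sigmoid activation, layer sizes, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random, fixed hidden-layer parameters of a single-hidden-layer network.
n_features, n_hidden, n_outputs = 4, 16, 3
W = rng.normal(size=(n_features, n_hidden))  # random input weights
b = rng.normal(size=n_hidden)                # random hidden biases

def hidden(X):
    """Hidden-layer output matrix H (sigmoid activation, an assumption)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Initial training set.
X0 = rng.normal(size=(200, n_features))
T0 = rng.normal(size=(200, n_outputs))
H0 = hidden(X0)

# The computation-expensive parts: U = H^T H and V = H^T T.
U = H0.T @ H0
V = H0.T @ T0
beta = np.linalg.pinv(U) @ V  # output weights from the initial data

# Incremental learning: only the new records' contribution is computed.
X_add = rng.normal(size=(50, n_features))
T_add = rng.normal(size=(50, n_outputs))
H_add = hidden(X_add)
U += H_add.T @ H_add
V += H_add.T @ T_add

# Decremental learning: retired records' contribution is subtracted.
# (Correctional learning composes the two: remove the old record, add the
# corrected one.)
H_del, T_del = H0[:20], T0[:20]
U -= H_del.T @ H_del
V -= H_del.T @ T_del

# Centralized step: solve for the new output weights from the updated U, V.
beta_updated = np.linalg.pinv(U) @ V

# Sanity check: identical (up to rounding) to retraining from scratch.
X_cur = np.vstack([X0[20:], X_add])
T_cur = np.vstack([T0[20:], T_add])
H_cur = hidden(X_cur)
beta_full = np.linalg.pinv(H_cur.T @ H_cur) @ (H_cur.T @ T_cur)
assert np.allclose(beta_updated, beta_full)
```

In the paper's distributed setting, the per-subset products (the `H_add.T @ H_add` terms here) would be produced by MapReduce jobs and only the small accumulated U and V matrices shipped to the centralized solve.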
Pages: 464-471
Page count: 8