GPU-accelerated and mixed norm regularized online extreme learning machine

Cited by: 3
Authors
Polat, Onder [1 ]
Kayhan, Sema Koc [1 ]
Affiliations
[1] Gaziantep Univ, Dept Elect & Elect Engn, TR-27310 Gaziantep, Turkey
Keywords
alternating direction method of multipliers; extreme learning machine; graphics processing unit; online sequential learning; regularization; REGRESSION; ALGORITHM;
DOI
10.1002/cpe.6967
Chinese Library Classification
TP31 [Computer software]
Subject Classification Codes
081202; 0835
Abstract
The extreme learning machine (ELM) is a prominent neural network model known for its fast training speed and good prediction performance. An online variant, the online sequential extreme learning machine (OS-ELM), has also been proposed for sequential training. Because regularization is needed to prevent over-fitting and a large number of hidden-layer neurons is typically required, OS-ELM demands a large amount of computational power on large-scale data. In this article, a mixed norm ($l_{2,1}$) regularized online extreme learning machine (MRO-ELM) based on the alternating direction method of multipliers (ADMM) is proposed. A linear combination of the mixed norm and the Frobenius norm regularization is applied within the ADMM framework, and the corresponding update formulas are derived. A graphics processing unit (GPU) accelerated version of MRO-ELM (GPU-MRO-ELM) is also proposed to reduce training time by executing suitable parts of the algorithm in parallel with custom kernels. In addition, a novel automatic hyper-parameter tuning method based on progressive validation, also GPU-accelerated, is incorporated into GPU-MRO-ELM. The experimental results show that MRO-ELM and its GPU version outperform OS-ELM in terms of training speed and testing accuracy. Compared with cross validation, the proposed automatic hyper-parameter tuning also achieves a dramatic reduction in tuning time.
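For orientation, a minimal sketch of the kind of objective the abstract describes, written in generic notation that is assumed here rather than taken from the paper: $H$ is the hidden-layer output matrix, $T$ the target matrix, $\beta$ the output weights, and $\lambda_1, \lambda_2$ the regularization weights.
\[
\min_{\beta}\ \|H\beta - T\|_F^2 + \lambda_1 \|\beta\|_{2,1} + \lambda_2 \|\beta\|_F^2,
\qquad
\|\beta\|_{2,1} = \sum_i \|\beta_{i,:}\|_2 .
\]
ADMM typically handles the non-smooth $l_{2,1}$ term by splitting it onto an auxiliary variable $Z$,
\[
\min_{\beta, Z}\ \|H\beta - T\|_F^2 + \lambda_2 \|\beta\|_F^2 + \lambda_1 \|Z\|_{2,1}
\quad \text{subject to}\quad \beta = Z,
\]
so that the $\beta$-update reduces to a ridge-type linear solve and the $Z$-update to row-wise soft-thresholding; the exact online update formulas used by MRO-ELM are derived in the article.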
Pages: 19
Related Papers (50 records in total)
  • [1] Timeliness online regularized extreme learning machine
    Luo, Xiong
    Yang, Xiaona
    Jiang, Changwei
    Ban, Xiaojuan
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2018, 9 (03) : 465 - 476
  • [2] GPU-Accelerated Parallel Hierarchical Extreme Learning Machine on Flink for Big Data
    Chen, Cen
    Li, Kenli
    Ouyang, Aijia
    Tang, Zhuo
    Li, Keqin
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2017, 47 (10) : 2740 - 2753
  • [3] Timeliness Online Regularized Extreme Learning Machine
    Luo, Xiong
    Yang, Xiaona
    Jiang, Changwei
    Ban, Xiaojuan
    PROCEEDINGS OF ELM-2015, VOL 1: THEORY, ALGORITHMS AND APPLICATIONS (I), 2016, 6 : 477 - 487
  • [4] Timeliness online regularized extreme learning machine
    Xiong Luo
    Xiaona Yang
    Changwei Jiang
    Xiaojuan Ban
    International Journal of Machine Learning and Cybernetics, 2018, 9 : 465 - 476
  • [5] GPU-accelerated approximate kernel method for quantum machine learning
    Browning, Nicholas J.
    Faber, Felix A.
    von Lilienfeld, O. Anatole
    JOURNAL OF CHEMICAL PHYSICS, 2022, 157 (21)
  • [6] Classification with Extreme Learning Machine on GPU
    Jezowicz, Tomas
    Gajdos, Petr
    Uher, Vojtech
    Snasel, Vaclav
    2015 INTERNATIONAL CONFERENCE ON INTELLIGENT NETWORKING AND COLLABORATIVE SYSTEMS IEEE INCOS 2015, 2015 : 116 - 122
  • [7] GPU-Accelerated Extreme Learning Machines for Imbalanced Data Streams with Concept Drift
    Krawczyk, Bartosz
    INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE 2016 (ICCS 2016), 2016, 80 : 1692 - 1701
  • [8] Regularized extreme learning machine for regression problems
    Martinez-Martinez, Jose M.
    Escandell-Montero, Pablo
    Soria-Olivas, Emilio
    Martin-Guerrero, Jose D.
    Magdalena-Benedito, Rafael
    Gomez-Sanchis, Juan
    NEUROCOMPUTING, 2011, 74 (17) : 3716 - 3721
  • [9] Smoothing Regularized Extreme Learning Machine
    Fan, Qin-Wei
    He, Xing-Shi
    Yang, Xin-She
    ENGINEERING APPLICATIONS OF NEURAL NETWORKS, EANN 2018, 2018, 893 : 83 - 93
  • [10] Incremental regularized extreme learning machine and it's enhancement
    Xu, Zhixin
    Yao, Min
    Wu, Zhaohui
    Dai, Weihui
    NEUROCOMPUTING, 2016, 174 : 134 - 142