Online ADMM-Based Extreme Learning Machine for Sparse Supervised Learning

Cited by: 9
Authors
Song, Tianheng [1 ]
Li, Dazi [1 ]
Liu, Zhiyin [1 ]
Yang, Weimin [2 ]
Affiliations
[1] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
[2] Beijing Univ Chem Technol, Coll Mech & Elect Engn, Beijing 100029, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation; Beijing Natural Science Foundation
Keywords
Online learning; alternating direction method of multipliers (ADMM); l1-regularization; extreme learning machine (ELM); sparse output parameters; NEURAL-NETWORKS; CLASSIFICATION; REGULARIZATION; APPROXIMATION; ALGORITHM;
DOI
10.1109/ACCESS.2019.2915970
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Sparse learning is an efficient technique for feature selection and for avoiding overfitting in machine learning. To meet the online-learning demands of real-world problems in neural networks, an online sparse supervised learning algorithm for the extreme learning machine (ELM), termed OAL1-ELM, is proposed based on the alternating direction method of multipliers (ADMM). In OAL1-ELM, an l1-regularization penalty is added to the loss function to produce a sparse solution and enhance generalization ability. The resulting convex composite loss function is solved with ADMM in a distributed manner. Furthermore, an improved ADMM is used to reduce the computational complexity and to achieve online learning. The proposed algorithm can learn data one-by-one or batch-by-batch. A convergence analysis for the fixed point of the solution is given to establish the efficiency and optimality of the proposed method. The experimental results show that the proposed method obtains a sparse solution and generalizes well across a wide range of regression tasks, multiclass classification tasks, and a real-world industrial project.
Pages: 64533-64544
Page count: 12
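
For readers who want a concrete picture of the formulation the abstract describes, below is a minimal Python sketch of an l1-regularized ELM whose output weights are fit batch-by-batch with the standard ADMM splitting (ridge-style beta-update, soft-thresholding, dual update). It is a sketch under stated assumptions, not the paper's algorithm: it assumes tanh hidden nodes and naively recomputes a matrix inverse per batch (the paper's improved ADMM targets exactly this cost), and all names (OnlineL1ELM, partial_fit, lam, rho) are illustrative rather than taken from the paper.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

class OnlineL1ELM:
    """Sketch of an l1-regularized ELM trained batch-by-batch with ADMM.

    Hidden-layer weights are random and fixed (standard ELM); only the
    sparse output weights are learned. `lam` is the l1 penalty and `rho`
    the ADMM penalty parameter (names are illustrative).
    """

    def __init__(self, n_inputs, n_hidden, lam=0.01, rho=1.0, rng=None):
        rng = np.random.default_rng(rng)
        self.W = rng.standard_normal((n_inputs, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.lam, self.rho = lam, rho
        # Running sufficient statistics H^T H and H^T y across batches,
        # which is what makes one-by-one / batch-by-batch learning possible.
        self.A = np.zeros((n_hidden, n_hidden))
        self.c = np.zeros(n_hidden)
        self.beta = np.zeros(n_hidden)  # primal variable
        self.z = np.zeros(n_hidden)     # sparse split variable
        self.u = np.zeros(n_hidden)     # scaled dual variable

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)  # random-feature hidden layer

    def partial_fit(self, X, y, n_iter=20):
        """Absorb one data batch, then run a few ADMM iterations on
        min 0.5*||H beta - y||^2 + lam*||z||_1  s.t.  beta = z."""
        H = self._hidden(X)
        self.A += H.T @ H
        self.c += H.T @ y
        M = np.linalg.inv(self.A + self.rho * np.eye(self.A.shape[0]))
        for _ in range(n_iter):
            self.beta = M @ (self.c + self.rho * (self.z - self.u))
            self.z = soft_threshold(self.beta + self.u, self.lam / self.rho)
            self.u += self.beta - self.z

    def predict(self, X):
        return self._hidden(X) @ self.z  # use the sparse iterate z

# Usage with data arriving batch-by-batch:
rng = np.random.default_rng(0)
model = OnlineL1ELM(n_inputs=5, n_hidden=50, lam=0.1, rng=0)
for _ in range(10):
    X = rng.standard_normal((32, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(32)
    model.partial_fit(X, y)
```

The soft-thresholding step is what yields the sparse output parameters highlighted in the abstract: any output weight whose magnitude stays below lam/rho is driven exactly to zero, effectively pruning the corresponding hidden node.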