Online ADMM-Based Extreme Learning Machine for Sparse Supervised Learning

Cited by: 9
Authors
Song, Tianheng [1 ]
Li, Dazi [1 ]
Liu, Zhiyin [1 ]
Yang, Weimin [2 ]
Affiliations
[1] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
[2] Beijing Univ Chem Technol, Coll Mech & Elect Engn, Beijing 100029, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China; Beijing Natural Science Foundation
Keywords
Online learning; alternating direction method of multipliers (ADMM); l(1)-regularization; extreme learning machine (ELM); sparse output parameters; NEURAL-NETWORKS; CLASSIFICATION; REGULARIZATION; APPROXIMATION; ALGORITHM
DOI
10.1109/ACCESS.2019.2915970
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Sparse learning is an efficient technique for feature selection and for avoiding overfitting in machine learning. To address real-world problems with online learning demands in neural networks, an online sparse supervised learning algorithm for the extreme learning machine (ELM), termed OAL1-ELM, is proposed based on the alternating direction method of multipliers (ADMM). In OAL1-ELM, an l(1)-regularization penalty is added to the loss function to generate a sparse solution and enhance generalization ability. This convex composite loss function is solved with ADMM in a distributed way. Furthermore, an improved ADMM is used to reduce computational complexity and to achieve online learning. The proposed algorithm can learn data one-by-one or batch-by-batch. A convergence analysis for the fixed point of the solution is given to show the efficiency and optimality of the proposed method. The experimental results show that the proposed method obtains a sparse solution and achieves strong generalization performance across a wide range of regression tasks, multiclass classification tasks, and a real-world industrial project.
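The core idea described in the abstract, an l(1)-regularized ELM whose output weights are found with ADMM, can be sketched in batch form as follows. This is a minimal illustration of the standard ADMM splitting for a lasso-type objective on random hidden-layer features, not the paper's improved online variant; the function and parameter names (`l1_elm_admm`, `lam`, `rho`) are illustrative choices, not the authors' API.

```python
import numpy as np

def soft_threshold(v, k):
    """Element-wise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def l1_elm_admm(X, Y, n_hidden=40, lam=0.5, rho=1.0, n_iter=200, seed=0):
    """Fit an l1-regularized ELM, min_b 0.5*||H b - Y||^2 + lam*||b||_1, via ADMM."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    bias = rng.standard_normal(n_hidden)             # random hidden biases
    H = np.tanh(X @ W + bias)                        # hidden-layer output matrix
    A = H.T @ H + rho * np.eye(n_hidden)             # fixed system matrix of the beta-step
    HtY = H.T @ Y
    beta = np.zeros((n_hidden, Y.shape[1]))          # output weights
    z = np.zeros_like(beta)                          # sparse auxiliary copy of beta
    u = np.zeros_like(beta)                          # scaled dual variable
    for _ in range(n_iter):
        beta = np.linalg.solve(A, HtY + rho * (z - u))  # quadratic subproblem
        z = soft_threshold(beta + u, lam / rho)         # l1 proximal step (induces zeros)
        u += beta - z                                   # dual update
    return W, bias, z  # z is the sparse output weight matrix

def elm_predict(X, W, bias, beta):
    return np.tanh(X @ W + bias) @ beta
```

Because the z-update is an exact soft-threshold, entries of the returned weight matrix are driven exactly to zero, which is what gives the method its feature-selection and overfitting-avoidance properties.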
Pages: 64533-64544
Page count: 12