Robust Extreme Learning Machine with Exponential Squared Loss via DC Programming

Cited by: 0
Authors
Wang, Kuaini [1 ,2 ]
Wang, Xiaoxue [3 ]
Zhan, Weicheng [3 ]
Wang, Mingming [3 ]
Cao, Jinde [1 ,4 ]
Affiliations
[1] Southeast Univ, Sch Math, Nanjing, Peoples R China
[2] Xian Shiyou Univ, Coll Sci, Xian, Peoples R China
[3] Xian Shiyou Univ, Sch Comp Sci, Xian, Peoples R China
[4] Yonsei Univ, Yonsei Frontier Lab, Seoul, South Korea
Keywords
Extreme learning machine; exponential squared loss; DC programming; DCA; robust regression; label noise; classification; regression; capability; nonconvex
DOI
10.14569/IJACSA.2023.01412109
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline Classification Code
081202
Abstract
The extreme learning machine (ELM) has recently attracted considerable attention because of its fast learning speed, simple model structure, and good generalization ability. However, the classical ELM with the least squares loss function is prone to overfitting and lacks robustness when dealing with real-world datasets that contain noise and outliers. In this paper, inspired by the maximum correntropy criterion, an exponential squared loss function is introduced; it is nonconvex and insensitive to noise and outliers. A robust ELM with exponential squared loss (RESELM) is presented to overcome the overfitting problem. Because of its nonconvexity, the proposed model is difficult to optimize directly. Given the strong performance of difference of convex functions (DC) programming on nonconvex problems, this paper optimizes the model by expressing the objective function as a DC function and employing the DC algorithm (DCA). To examine the effectiveness of the proposed algorithm in noisy environments, different levels of outliers are added to the training samples in the experiments. Experimental results on benchmark datasets with different outlier levels show that the proposed RESELM achieves significant advantages in generalization performance and robustness, especially at higher outlier levels.
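The recipe the abstract describes (random ELM feature map, correntropy-inspired exponential squared loss, DC decomposition, DCA iterations) can be illustrated with a short sketch. The Python/NumPy code below is a minimal illustration under stated assumptions, not the authors' published implementation: it assumes the loss L_eta(r) = eta * (1 - exp(-r^2/eta)), the DC split L_eta(r) = r^2 - psi(r) with convex psi(r) = r^2 - L_eta(r), and the objective 0.5*||beta||^2 + C * sum_i L_eta(y_i - H_i beta); the names eta, C, and n_hidden are illustrative hyperparameters.

```python
import numpy as np

def elm_features(X, n_hidden=100, seed=0):
    """Random single-hidden-layer ELM feature map with sigmoid activation."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.normal(size=n_hidden)                # fixed random biases
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def reselm_fit(H, y, C=10.0, eta=1.0, max_iter=50, tol=1e-6):
    """DCA sketch for: min_beta 0.5*||beta||^2 + C * sum_i L_eta(y_i - H_i beta),
    with L_eta(r) = eta * (1 - exp(-r^2/eta)).

    DC split of the loss: L_eta(r) = r^2 - psi(r), where
    psi(r) = r^2 - L_eta(r) is convex. Linearizing psi at the current
    residuals turns each DCA step into the ridge-type linear system
    (I + 2C H'H) beta = C H'(2y - psi'(r)).
    """
    m = H.shape[1]
    A = np.eye(m) + 2.0 * C * (H.T @ H)              # convex-part Hessian, fixed
    beta = np.linalg.solve(A, 2.0 * C * (H.T @ y))   # regularized-ELM warm start
    for _ in range(max_iter):
        r = y - H @ beta
        psi_grad = 2.0 * r * (1.0 - np.exp(-r**2 / eta))  # psi'(r), elementwise
        beta_new = np.linalg.solve(A, C * (H.T @ (2.0 * y - psi_grad)))
        if np.linalg.norm(beta_new - beta) <= tol * (1.0 + np.linalg.norm(beta)):
            return beta_new
        beta = beta_new
    return beta
```

A quick way to exercise the sketch, mirroring the experimental setup of injecting outliers into the training targets (all data here is synthetic, for illustration only):

```python
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 0.1 * rng.normal(size=200)
y[:20] += 10.0 * rng.normal(size=20)   # contaminate 10% of targets with outliers
H = elm_features(X)
beta = reselm_fit(H, y, C=10.0, eta=1.0)
# fit quality against the clean underlying signal, not the corrupted targets
print("RMSE vs clean signal:", np.sqrt(np.mean((H @ beta - X[:, 0]) ** 2)))
```

Note the limiting behavior of the update: for small residuals psi'(r) is near zero and the step reduces to ordinary regularized ELM, while for large (outlier) residuals psi'(r) approaches 2r and the outlier's influence on the solve is effectively cancelled, which is the robustness mechanism the loss is designed to provide.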
Pages: 1066-1074
Number of pages: 9