Generalized robust loss functions for machine learning

Cited: 8
Authors
Fu, Saiji [1 ]
Wang, Xiaoxiao [2 ,6 ,7 ]
Tang, Jingjing [3 ,4 ]
Lan, Shulin [6 ]
Tian, Yingjie [5 ,6 ,7 ,8 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Econ & Management, Beijing 100876, Peoples R China
[2] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[3] Southwestern Univ Finance & Econ, Fac Business Adm, Sch Business Adm, Chengdu 611130, Peoples R China
[4] Southwestern Univ Finance & Econ, Inst Big Data, Chengdu 611130, Peoples R China
[5] Univ Chinese Acad Sci, Sch Econ & Management, Beijing 100190, Peoples R China
[6] Chinese Acad Sci, Res Ctr Fictitious Econ & Data Sci, Beijing 100190, Peoples R China
[7] Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 100190, Peoples R China
[8] UCAS, MOE Social Sci Lab Digital Econ Forecasts & Policy, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Robust loss function; Machine learning; Kernel classifier; Flattened Squares loss function; Flattened Hinge loss function; SUPPORT VECTOR REGRESSION; RAMP-LOSS; CLASSIFICATION; SPARSE;
DOI
10.1016/j.neunet.2023.12.013
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The loss function is a critical component of machine learning. Several robust loss functions have been proposed to mitigate the adverse effects of noise, but they still face challenges. First, there is currently no unified framework for building robust loss functions in machine learning. Second, most of them focus only on noisy points and pay little attention to normal points. Third, the resulting performance gains are limited. To this end, we put forward a general framework of robust loss functions for machine learning (RML), with rigorous theoretical analyses, which can smoothly and adaptively flatten any unbounded loss function and applies to various machine learning problems. In RML, an unbounded loss function serves as the target to be flattened. A scale parameter limits the maximum penalty assigned to noisy points, while a shape parameter controls both the compactness and the growth rate of the flattened loss function. We then employ this framework to flatten the Hinge loss function and the Square loss function, and on this basis build two robust kernel classifiers, FHSVM and FLSSVM, which can distinguish different types of data. The stochastic variance reduced gradient (SVRG) approach is used to optimize FHSVM and FLSSVM. Extensive experiments demonstrate their superiority: both consistently occupy the top two positions among all evaluated methods, with FHSVM achieving an average accuracy of 81.07% (F-score 73.25%) and FLSSVM 81.54% (F-score 75.71%).
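The flattening idea in the abstract — bounding an unbounded loss with a scale parameter (maximum penalty for noise points) and a shape parameter (saturation rate) — can be sketched as follows. The paper's exact RML transform is not given in the abstract, so the exponential saturating form below is an illustrative surrogate with the same qualitative behavior, not the authors' formula; `flatten`, `scale`, and `shape` are names chosen here for illustration.

```python
import numpy as np

def hinge(z):
    # Unbounded Hinge loss on margins z = y * f(x): max(0, 1 - z).
    return np.maximum(0.0, 1.0 - z)

def flatten(loss_val, scale=2.0, shape=1.0):
    # Illustrative smooth, bounded surrogate (NOT the paper's exact RML
    # transform): rises like the raw loss near zero, saturates at `scale`.
    # `scale` caps the penalty a gross outlier can contribute;
    # `shape` controls how fast the flattened loss approaches that cap.
    return scale * (1.0 - np.exp(-(loss_val / scale) ** shape))

margins = np.array([-5.0, 0.0, 1.0, 3.0])
raw = hinge(margins)            # [6.0, 1.0, 0.0, 0.0] -- outlier dominates
flat = flatten(raw, scale=2.0)  # outlier's penalty is capped near 2.0
```

Correctly classified points (margin >= 1) keep zero loss, so normal points are treated exactly as by the original Hinge loss, while the outlier at margin -5 contributes at most `scale` instead of 6 — the bounded-influence property that motivates flattening.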
Pages: 200-214
Page count: 15