Self-paced resistance learning against overfitting on noisy labels

Cited by: 15
|
Authors
Shi, Xiaoshuan [1 ]
Guo, Zhenhua [2 ]
Li, Kang [3 ,4 ,5 ]
Liang, Yun [6 ]
Zhu, Xiaofeng [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Technol, Chengdu, Sichuan, Peoples R China
[2] Smart Transportat, Suzhou, Peoples R China
[3] Sichuan Univ, West China Hosp, West China Biomed Big Data Ctr, Chengdu, Sichuan, Peoples R China
[4] Sichuan Univ, MedX Ctr Informat, Chengdu, Sichuan, Peoples R China
[5] Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
[6] Univ Florida, J Crayton Pruitt Family Dept Biomed Engn, Gainesville, FL 32611 USA
Keywords
Convolutional neural networks; Self-paced resistance; Model overfitting; Noisy labels; CLASSIFICATION;
DOI
10.1016/j.patcog.2022.109080
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Noisy labels, composed of both correct and corrupted ones, are pervasive in practice. They can significantly deteriorate the performance of convolutional neural networks (CNNs), because CNNs easily overfit corrupted labels. To address this issue, inspired by the observation that deep neural networks tend to first memorize probably-correct-label data and only later corrupt-label samples, we propose a novel yet simple self-paced resistance framework to resist corrupted labels, without using any clean validation data. The proposed framework first exploits the memorization effect of CNNs to learn a curriculum, which contains confident samples and provides meaningful supervision for the other training samples. It then uses the selected confident samples and a proposed resistance loss to update the model parameters; the resistance loss tends to smooth the parameter updates or to attain equivalent predictions over each class, thereby resisting model overfitting on corrupted labels. Finally, we unify these two modules into a single loss function and optimize it via alternating learning. Extensive experiments demonstrate the significantly superior performance of the proposed framework over recent state-of-the-art methods on noisy-label data. Source code is available at https://github.com/xsshi2015/Self-paced-Resistance-Learning. (c) 2022 Elsevier Ltd. All rights reserved.
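The abstract does not give the exact form of the resistance loss; the paper and repository above should be consulted for the actual definition. As a minimal sketch of the general idea, assuming the loss combines cross-entropy on curriculum (confident) samples with a regularizer that pulls predictions toward equal probability over each class, it might look as follows (the function name `resistance_loss` and the weight `lam` are hypothetical):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def resistance_loss(logits, targets, lam=0.5):
    """Cross-entropy on confident (curriculum) samples plus a term that
    pushes predictions toward equal probability over each class.

    lam is a hypothetical trade-off weight; the record does not specify one.
    """
    probs = softmax(logits)
    n, c = logits.shape
    ce = -np.mean(np.log(probs[np.arange(n), targets] + 1e-12))
    # KL(prediction || uniform): zero when every class gets probability 1/c,
    # so minimizing it "resists" confident fits to possibly corrupted labels.
    resist = np.mean(
        np.sum(probs * (np.log(probs + 1e-12) - np.log(1.0 / c)), axis=1)
    )
    return ce + lam * resist
```

With `lam = 0` this reduces to plain cross-entropy; a larger `lam` trades label fit for smoother, less confident predictions, which is one way to realize the "equivalent prediction over each class" behavior the abstract describes.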
Pages: 12
Related Papers
50 records in total
  • [31] Self-Paced Learning with Statistics Uncertainty Prior
    Guo, Lihua
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2018, E101D (03) : 812 - 816
  • [32] Self-Paced Learning: An Implicit Regularization Perspective
    Fan, Yanbo
    He, Ran
    Liang, Jian
    Hu, Baogang
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 1877 - 1883
  • [33] Self-paced Learning for Pedestrian Trajectory Prediction
    Wu, Ya
    Li, Bin
    Zhang, Ruiqi
    Chen, Guang
    Li, Zhijun
    Liu, Zhengfa
    2022 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2022), 2022, : 781 - 786
  • [34] Self-paced contrastive learning for knowledge tracing
    Dai, Huan
    Yun, Yue
    Zhang, Yupei
    An, Rui
    Zhang, Wenxin
    Shang, Xuequn
    NEUROCOMPUTING, 2024, 609
  • [36] Self-paced hierarchical metric learning (SPHML)
    Al-taezi, Mohammed
    Zhu, Pengfei
    Hu, Qinghua
    Wang, Yu
    Al-badwi, Abdulrahman
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12 (09) : 2529 - 2541
  • [37] Active Clustering Ensemble With Self-Paced Learning
    Zhou, Peng
    Sun, Bicheng
    Liu, Xinwang
    Du, Liang
    Li, Xuejun
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 12186 - 12200
  • [38] Balanced Self-Paced Learning for AUC Maximization
    Gu, Bin
    Zhang, Chenkang
    Xiong, Huan
    Huang, Heng
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 6765 - 6773
  • [39] Symmetric Self-Paced Learning for Domain Generalization
    Zhao, Di
    Koh, Yun Sing
    Dobbie, Gillian
    Hu, Hongsheng
    Fournier-Viger, Philippe
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16961 - 16969
  • [40] Self-paced e-learning courses
[Anonymous]
    IEEE DESIGN & TEST OF COMPUTERS, 2002, 19 (01): : 5 - 5