Self-Filtering: A Noise-Aware Sample Selection for Label Noise with Confidence Penalization

Cited by: 27
Authors
Wei, Qi [1 ]
Sun, Haoliang [1 ]
Lu, Xiankai [1 ]
Yin, Yilong [1 ]
Affiliations
[1] Shandong Univ, Sch Software, Jinan, Peoples R China
Source
COMPUTER VISION - ECCV 2022, PT XXX | 2022, Vol. 13690
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Label noise; Sample selection; Confidence penalization;
DOI
10.1007/978-3-031-20056-4_30
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Sample selection is an effective strategy for mitigating the effect of label noise in robust learning. Typical strategies apply the small-loss criterion to identify clean samples. However, samples lying near the decision boundary usually have large losses and are entangled with noisy examples; this criterion discards them, leading to heavy degradation of generalization performance. In this paper, we propose a novel selection strategy, Self-Filtering (SFT), that utilizes the fluctuation of noisy examples in historical predictions to filter them out, thereby avoiding the selection bias that the small-loss criterion exhibits against boundary examples. Specifically, we introduce a memory bank module that stores the historical predictions of each example and is dynamically updated to support selection in the subsequent learning iteration. Moreover, to reduce the accumulated error from the sample selection bias of SFT, we devise a regularization term that penalizes confident output distributions. By increasing the weight of the misclassified categories with this term, the loss function becomes robust to label noise under mild conditions. We conduct extensive experiments on three benchmarks with various noise types and achieve new state-of-the-art results. Ablation studies and further analysis verify the merit of SFT for sample selection in robust learning.
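A minimal, hypothetical sketch of the fluctuation-based selection idea the abstract describes, assuming a memory bank keyed by sample ID that accumulates one predicted label per epoch. The function names, the two-epoch comparison window, and the exact fluctuation test (agreeing with the given label at one epoch, then flipping away at the next) are illustrative choices here, not the paper's verbatim algorithm or its confidence-penalization term:

```python
def update_memory_bank(memory_bank, sample_ids, predictions):
    """Append the current epoch's predicted label for each sample
    to its history in the memory bank (a dict: id -> list of labels)."""
    for sid, pred in zip(sample_ids, predictions):
        memory_bank.setdefault(sid, []).append(int(pred))


def select_clean(memory_bank, given_labels, window=2):
    """Flag a sample as noisy if its recent predictions 'fluctuate':
    the model matched the given label at some epoch in the window but
    flipped away from it at the next epoch. Non-fluctuating samples
    are kept as (presumed) clean for the next training iteration."""
    clean = []
    for sid, label in given_labels.items():
        hist = memory_bank.get(sid, [])[-window:]
        fluctuated = any(
            hist[i] == label and hist[i + 1] != label
            for i in range(len(hist) - 1)
        )
        if not fluctuated:
            clean.append(sid)
    return clean
```

Unlike the small-loss criterion, this test does not discard a hard boundary example that is consistently (even if unconfidently) predicted as its given label, since its history never flips away from that label.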
Pages: 516-532
Page count: 17