Towards Robust Uncertainty Estimation in the Presence of Noisy Labels

Cited by: 1
Authors
Pan, Chao [1 ,2 ]
Yuan, Bo [1 ,2 ]
Zhou, Wei [3 ]
Yao, Xin [1 ,2 ]
Affiliations
[1] Southern Univ Sci & Technol SUSTech, Shenzhen 518055, Peoples R China
[2] Southern Univ Sci & Technol SUSTech, Dept Comp Sci & Engn, Guangdong Prov Key Lab Brain Inspired Intelligent, Shenzhen 518055, Peoples R China
[3] Huawei Technol Co Ltd, Trustworthiness Theory Res Ctr, Shenzhen, Peoples R China
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT I | 2022, Vol. 13529
Keywords
Uncertainty estimation; Noisy label; Out-of-distribution data; Mis-classification detection;
DOI
10.1007/978-3-031-15919-0_56
CLC Classification Code
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In security-critical applications, it is essential to know how confident a model is in its predictions. Many uncertainty estimation methods have been proposed recently, and they are reliable when the training data contain no labeling errors. However, we find that the quality of these uncertainty estimates degrades dramatically when noisy labels are present in the training data. On some datasets, the uncertainty estimates become completely absurd, even though the label noise barely affects test accuracy. We further analyze the impact of existing label noise handling methods on the reliability of uncertainty estimates, although most of these methods focus only on improving model accuracy. We identify that data cleaning-based approaches can alleviate the influence of label noise on uncertainty estimates to some extent, but drawbacks remain. Finally, we propose a robust uncertainty estimation method under label noise. Compared with other algorithms, our approach achieves more reliable uncertainty estimates in the presence of noisy labels, especially when there are large-scale labeling errors in the training data.
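The abstract does not specify the paper's uncertainty measure. As background, a common baseline in this literature is the predictive entropy of the averaged softmax outputs from several stochastic forward passes (e.g. Monte Carlo dropout) or ensemble members. The sketch below is a generic NumPy illustration of that standard measure, not the authors' proposed method; the function name and toy inputs are assumptions for illustration.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution.

    probs: array of shape (n_passes, n_classes) holding softmax
    outputs from multiple stochastic forward passes (e.g. MC dropout)
    or ensemble members. Higher entropy = higher uncertainty.
    """
    mean_p = probs.mean(axis=0)                      # average the distributions
    return float(-np.sum(mean_p * np.log(mean_p + 1e-12)))  # Shannon entropy

# When the passes agree on one class, entropy is low; when they
# disagree (a symptom often amplified by noisy training labels),
# the averaged distribution flattens and entropy rises.
confident = np.array([[0.98, 0.01, 0.01],
                      [0.97, 0.02, 0.01]])
uncertain = np.array([[0.80, 0.10, 0.10],
                      [0.10, 0.80, 0.10]])
assert predictive_entropy(confident) < predictive_entropy(uncertain)
```

It is precisely this kind of estimate that, per the abstract, can become unreliable when the model is trained on mislabeled data even though top-1 accuracy is barely affected.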
Pages: 673-684 (12 pages)