Towards Robust Uncertainty Estimation in the Presence of Noisy Labels

Cited by: 1
Authors
Pan, Chao [1 ,2 ]
Yuan, Bo [1 ,2 ]
Zhou, Wei [3 ]
Yao, Xin [1 ,2 ]
Affiliations
[1] Southern Univ Sci & Technol SUSTech, Shenzhen 518055, Peoples R China
[2] Southern Univ Sci & Technol SUSTech, Dept Comp Sci & Engn, Guangdong Prov Key Lab Brain Inspired Intelligent, Shenzhen 518055, Peoples R China
[3] Huawei Technol Co Ltd, Trustworthiness Theory Res Ctr, Shenzhen, Peoples R China
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT I | 2022 / Vol. 13529
Keywords
Uncertainty estimation; Noisy label; Out-of-distribution data; Mis-classification detection;
DOI
10.1007/978-3-031-15919-0_56
Chinese Library Classification (CLC) code
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In security-critical applications, it is essential to know how confident a model is in its predictions. Many uncertainty estimation methods have been proposed recently, and they are reliable when the training data contain no labeling errors. However, we find that the quality of these uncertainty estimates degrades dramatically when noisy labels are present in the training data. On some datasets, the uncertainty estimates become completely unreliable, even though the label noise barely affects test accuracy. We further analyze the impact of existing label noise handling methods on the reliability of uncertainty estimates, although most of these methods focus only on improving model accuracy. We find that data cleaning-based approaches can alleviate the influence of label noise on uncertainty estimates to some extent, but they still have drawbacks. Finally, we propose an uncertainty estimation method that is robust to label noise. Compared with other algorithms, our approach achieves more reliable uncertainty estimates in the presence of noisy labels, especially when there are large-scale labeling errors in the training data.
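To make the kind of degradation the abstract describes concrete, the following is a minimal, hypothetical sketch (not the authors' method): it injects symmetric label noise into a toy training set, fits a small classifier, and measures misclassification-detection AUROC using max-softmax confidence as the uncertainty score. The dataset, model, noise model, and confidence score are all illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: how label noise in training data can be simulated and
# how uncertainty quality (misclassification detection) can be measured.
# All choices below (toy data, MLP, max-softmax confidence) are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy multi-class dataset standing in for a real benchmark (illustrative only).
X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)


def flip_labels(labels, rate, n_classes, rng):
    """Symmetric label noise: flip a fraction of labels to a random other class."""
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    noisy[idx] = (noisy[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return noisy


for noise_rate in (0.0, 0.4):
    y_noisy = flip_labels(y_tr, noise_rate, n_classes=4, rng=rng)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=0).fit(X_tr, y_noisy)
    proba = clf.predict_proba(X_te)
    confidence = proba.max(axis=1)            # max-softmax as the uncertainty proxy
    wrong = proba.argmax(axis=1) != y_te      # True = misclassified test point
    # AUROC of "low confidence predicts error": lower values mean the
    # uncertainty estimate is less useful for misclassification detection.
    auroc = roc_auc_score(wrong, -confidence)
    print(f"noise={noise_rate:.1f}  test acc={1 - wrong.mean():.3f}  "
          f"misclassification-detection AUROC={auroc:.3f}")
```

Under such a setup, one would compare how the detection AUROC changes with the noise rate relative to the change in test accuracy, which mirrors the accuracy-versus-uncertainty gap the abstract highlights.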
Pages: 673-684
Number of pages: 12
Related Papers
50 in total
  • [32] Learning Latent Stable Patterns for Image Understanding With Weak and Noisy Labels
    Yao, Yiyang
    Luo, Wang
    Zhang, Luming
    Yang, Yi
    Li, Ping
    Zimmermann, Roger
    Shao, Ling
    IEEE TRANSACTIONS ON CYBERNETICS, 2019, 49 (12) : 4243 - 4252
  • [33] Meta-Learning for Decoding Neural Activity Data With Noisy Labels
    Xu, Dongfang
    Chen, Rong
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2022, 16
  • [34] Subclass consistency regularization for learning with noisy labels based on contrastive learning
    Sun, Xinkai
    Zhang, Sanguo
    NEUROCOMPUTING, 2025, 614
  • [35] Don't worry about noisy labels in soft shadow detection
    Wu, Xian-Tao
    Wu, Wen
    Zhang, Lin-Lin
    Wan, Yi
    VISUAL COMPUTER, 2023, 39 (12) : 6297 - 6308
  • [36] Deep learning with noisy labels in medical prediction problems: a scoping review
    Wei, Yishu
    Deng, Yu
    Sun, Cong
    Lin, Mingquan
    Jiang, Hongmei
    Peng, Yifan
    JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, 2024, 31 (07) : 1596 - 1607
  • [37] Uncertainty estimation in robust tracking control of robot manipulators using the Fourier series expansion
    Khorashadizadeh, Saeed
    Fateh, Mohammad Mehdi
    ROBOTICA, 2017, 35 (02) : 310 - 336
  • [38] Labels Are Not Perfect: Inferring Spatial Uncertainty in Object Detection
    Feng, Di
    Wang, Zining
    Zhou, Yiyang
    Rosenbaum, Lars
    Timm, Fabian
    Dietmayer, Klaus
    Tomizuka, Masayoshi
    Zhan, Wei
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (08) : 9981 - 9994
  • [39] Robust Point Cloud Segmentation With Noisy Annotations
    Ye, Shuquan
    Chen, Dongdong
    Han, Songfang
    Liao, Jing
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (06) : 7696 - 7710
  • [40] Teacher/Student Deep Semi-Supervised Learning for Training with Noisy Labels
    Hailat, Zeyad
    Chen, Xue-Wen
    2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2018, : 907 - 912