Fine-Grained Classification with Noisy Labels

Cited by: 33
Authors
Wei, Qi [1]
Feng, Lei [2]
Sun, Haoliang [1]
Wang, Ren [1]
Guo, Chenhui [1]
Yin, Yilong [1]
Affiliations
[1] Shandong Univ, Sch Software, Jinan, Peoples R China
[2] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
DOI
10.1109/CVPR52729.2023.01121
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Learning with noisy labels (LNL) aims to ensure model generalization given a label-corrupted training set. In this work, we investigate a rarely studied scenario of LNL on fine-grained datasets (LNL-FG), which is more practical and challenging because large inter-class ambiguities among fine-grained classes cause more noisy labels. We empirically show that existing methods that work well for LNL fail to achieve satisfactory performance for LNL-FG, giving rise to a practical need for effective solutions to LNL-FG. To this end, we propose a novel framework called stochastic noise-tolerated supervised contrastive learning (SNSCL) that confronts label noise by encouraging distinguishable representations. Specifically, we design a noise-tolerated supervised contrastive learning loss that incorporates a weight-aware mechanism for noisy-label correction and selective updating of momentum queue lists. With this mechanism, we mitigate the effect of noisy anchors and avoid inserting noisy labels into the momentum-updated queue. In addition, to avoid manually defined augmentation strategies in contrastive learning, we propose an efficient stochastic module that samples feature embeddings from a generated distribution, which can also enhance the representation ability of deep models. SNSCL is general and compatible with prevailing robust LNL strategies, improving their performance on LNL-FG. Extensive experiments demonstrate the effectiveness of SNSCL.
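The two mechanisms the abstract names can be illustrated with a minimal sketch: a supervised contrastive loss in which each anchor's contribution is scaled by a per-sample label-confidence weight, and a momentum queue that only admits high-confidence samples. This is not the authors' implementation; the function names, the confidence threshold, and the FIFO queue size are illustrative assumptions, and the loss follows the standard SupCon form rather than the paper's exact formulation.

```python
import numpy as np

def weighted_supcon_loss(features, labels, weights, temperature=0.1):
    """Per-anchor weighted supervised contrastive loss (sketch).

    features: (N, D) embeddings (L2-normalized inside)
    labels:   (N,) possibly noisy class labels
    weights:  (N,) confidence that each label is clean, in [0, 1];
              low-weight (likely noisy) anchors contribute less.
    """
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature
    n, total = len(labels), 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # no same-class sample to contrast against
        others = np.delete(sim[i], i)                  # exclude self-similarity
        log_denom = np.log(np.exp(others).sum())       # log-sum-exp over others
        # negative mean log-likelihood of i's positives, downweighted by w_i
        total += weights[i] * -np.mean([sim[i, j] - log_denom for j in positives])
    return total / n

def update_queue(queue, feats, labels, weights, threshold=0.9, max_len=16):
    """Selective queue update: admit only high-confidence (feature, label)
    pairs, so likely-noisy labels never enter the momentum queue."""
    for f, y, w in zip(feats, labels, weights):
        if w >= threshold:
            queue.append((f, y))
    del queue[:-max_len]  # keep the queue bounded, dropping oldest entries
    return queue
```

Setting an anchor's weight to zero removes it from the loss entirely, while the threshold in `update_queue` keeps suspected-noisy pairs out of the negative/positive pool that future anchors contrast against.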
Pages: 11651-11660
Page count: 10