Learning With Noisy Labels by Semantic and Feature Space Collaboration

Cited by: 0
Authors
Lin, Han [1 ,2 ]
Li, Yingjian [2 ]
Zhang, Zheng [1 ,2 ]
Zhu, Lei [3 ]
Xu, Yong [1 ,2 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518000, Peoples R China
[3] Tongji Univ, Sch Elect & Informat Engn, Shanghai 200070, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Noise measurement; Semantics; Prototypes; Self-supervised learning; Collaboration; Training; Robustness; Label noise; collaborative learning; deep learning; image classification; FACE RECOGNITION;
DOI
10.1109/TCSVT.2024.3371513
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject classification codes
0808 ; 0809 ;
Abstract
Learning with noisy labels has become increasingly popular because collecting high-quality labels is expensive. To avoid the degradation in model performance caused by incorrect annotations, some existing methods try to select reliable samples based on the local structure of nearest neighbors in the feature space. However, the information from local neighbors is unreliable in extremely noisy cases, and selecting samples using only the feature space can cause clear noise accumulation. To this end, we propose a Dual-Space Collaborative Learning (DSCL) framework to boost classification accuracy by jointly using the complementary information from both the semantic and feature spaces. Specifically, a collaborative selection module is designed by constructing a set of global prototypes and high-confidence semantic predictions, which enhances the robustness of the sample selection process. Moreover, a collaborative regularization module is constructed by the bidirectional adjustment between the semantic and feature spaces, which effectively alleviates the noise accumulation caused by sample selection bias in a single space. By utilizing the two modules simultaneously, our method improves the accuracy of sample selection and mitigates the degradation caused by noisy labels. Extensive experimental results indicate the superior performance of DSCL compared with various baselines. The source code of this paper is available at https://github.com/DarrenZZhang/DSCL
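The abstract describes selecting reliable samples by combining two "votes": the nearest class prototype in feature space and a high-confidence semantic (classifier) prediction. The sketch below is a minimal illustration of that idea, not the paper's exact formulation; the function name, the agreement rule, and the confidence threshold `tau` are all assumptions for exposition.

```python
import numpy as np

def select_clean_samples(features, logits, noisy_labels, prototypes, tau=0.7):
    """Hypothetical sketch of dual-space sample selection.

    A sample is treated as clean only when BOTH spaces agree with its
    given label: the class of its nearest prototype in feature space,
    and the argmax of the classifier's semantic prediction with softmax
    confidence above tau. Names and the threshold are illustrative.
    """
    # Feature-space vote: cosine similarity to L2-normalized prototypes.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    proto_pred = (f @ p.T).argmax(axis=1)

    # Semantic-space vote: numerically stable softmax over class logits.
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    sem_pred = probs.argmax(axis=1)
    confident = probs.max(axis=1) > tau

    # Keep samples where both spaces agree with the (possibly noisy) label.
    return (proto_pred == noisy_labels) & (sem_pred == noisy_labels) & confident
```

In this toy form, a sample whose feature sits near the wrong class prototype, or whose classifier prediction disagrees with its label, is excluded from the clean set; the paper's actual modules additionally regularize the two spaces against each other.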
Pages: 7190-7201
Page count: 12