Learning with Noisy Labels via Self-supervised Adversarial Noisy Masking

Cited by: 11
Authors
Tu, Yuanpeng [1]
Zhang, Boshen [2]
Li, Yuxi [2]
Liu, Liang [2]
Li, Jian [2]
Zhang, Jiangning [2]
Wang, Yabiao [2]
Wang, Chengjie [2,3]
Zhao, Cairong [1]
Affiliations
[1] Tongji Univ, Dept Elect & Informat Engn, Shanghai, Peoples R China
[2] Tencent, YouTu Lab, Shanghai, Peoples R China
[3] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
Keywords
CLASSIFICATION;
DOI
10.1109/CVPR52729.2023.01553
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Collecting large-scale datasets is crucial for training deep models; annotating the data, however, inevitably yields noisy labels, which pose challenges to deep learning algorithms. Previous efforts tend to mitigate this problem by identifying and removing noisy samples, or by correcting their labels according to statistical properties (e.g., loss values) of the training samples. In this paper, we tackle the problem from a new perspective: delving into the deep feature maps, we empirically find that models trained on clean versus mislabeled samples manifest distinguishable activation feature distributions. Motivated by this observation, we propose a novel robust training approach termed adversarial noisy masking. The idea is to regularize deep features with a label-quality-guided masking scheme that adaptively modulates the input data and its label simultaneously, preventing the model from overfitting noisy samples. Furthermore, an auxiliary task is designed to reconstruct the input data; it naturally provides noise-free self-supervised signals that reinforce the generalization ability of the model. The proposed method is simple yet effective: tested on both synthetic and real-world noisy datasets, it obtains significant improvements over previous methods. Code is available at https://github.com/yuanpengtu/SANM.
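To make the abstract's core idea concrete, the following is a toy sketch of a label-quality-guided noisy masking step: it masks a fraction of input patches and softens the one-hot label in proportion to an estimated label-quality score. This is an illustrative assumption, not the paper's actual SANM implementation; using the normalized per-sample loss as the quality score, the patch size, and interpolation toward a uniform label are all simplifications chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_masking(image, onehot_label, loss, loss_max, patch=4, max_ratio=0.5):
    """Mask image patches and soften the label, both scaled by an
    estimated label-quality score (here: the normalized per-sample loss).

    A high loss suggests a likely-noisy label, so more of the input is
    masked and the label is pushed toward the uniform distribution.
    """
    # Label-quality score in [0, 1]: 0 = likely clean, 1 = likely noisy.
    q = min(loss / loss_max, 1.0)
    ratio = q * max_ratio  # fraction of patches to mask

    h, w = image.shape[:2]
    masked = image.copy()
    ph, pw = h // patch, w // patch          # patch-grid dimensions
    n_patches = ph * pw
    n_mask = int(round(ratio * n_patches))
    for i in rng.choice(n_patches, size=n_mask, replace=False):
        r, c = divmod(i, pw)                 # grid cell -> pixel block
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0

    # Soften the label by the same score: interpolate toward uniform.
    k = onehot_label.shape[0]
    soft_label = (1 - q) * onehot_label + q * np.full(k, 1.0 / k)
    return masked, soft_label
```

A likely-clean sample (low loss) passes through almost unchanged, while a likely-noisy one has part of its input hidden and its label flattened, so the network cannot memorize the wrong label from the full image.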
Pages: 16186-16195
Page count: 10