COMBINING SELF-SUPERVISED AND SUPERVISED LEARNING WITH NOISY LABELS

Cited by: 0
Authors
Zhang, Yongqi [1 ]
Zhang, Hui [1 ]
Yao, Quanming [2 ]
Wan, Jun [3 ]
Affiliations
[1] 4Paradigm Inc, Beijing, Peoples R China
[2] Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China
[3] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
Source
2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP | 2023
Keywords
Convolutional neural network; noisy label learning; self-supervised learning; robustness
DOI
10.1109/ICIP49359.2023.10221957
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Since convolutional neural networks (CNNs) easily overfit noisy labels, which are ubiquitous in visual classification tasks, training CNNs robustly against such labels has been a great challenge. Various methods have been proposed for this challenge, but none of them pay attention to the difference between representation learning and classifier learning in CNNs. Inspired by the observation that the classifier is more robust to noisy labels while the representation is much more fragile, and by recent advances in self-supervised representation learning (SSRL), we design a new method, CS³NL, which obtains the representation by SSRL without labels and trains the classifier directly with noisy labels. Extensive experiments are performed on both synthetic and real benchmark datasets. Results demonstrate that the proposed method can beat state-of-the-art ones by a large margin, especially under high noise levels.
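For intuition, a minimal PyTorch sketch of the two-stage recipe summarized in the abstract is given below: the backbone is assumed to be already pretrained with a self-supervised method and is kept frozen, and only a linear classifier is fitted directly on the noisy labels. The ResNet-18 backbone, helper names, and hyperparameters here are illustrative assumptions, not the authors' implementation.

# Sketch of the two-stage idea: SSRL representation (frozen) + classifier
# trained directly on noisy labels. Backbone choice and hyperparameters are
# illustrative assumptions, not the paper's exact pipeline.
import torch
import torch.nn as nn
import torchvision


def build_frozen_encoder() -> nn.Module:
    # Stand-in for an SSRL-pretrained backbone: the representation is learned
    # without labels beforehand and kept fixed afterwards.
    encoder = torchvision.models.resnet18(weights=None)  # load SSRL weights here
    encoder.fc = nn.Identity()                           # expose 512-d features
    for p in encoder.parameters():
        p.requires_grad = False                          # freeze representation
    return encoder.eval()


def train_classifier_on_noisy_labels(encoder, loader, num_classes=10, epochs=5):
    # Stage 2: fit only a linear classifier on the (possibly noisy) labels,
    # relying on the frozen SSRL features for robustness.
    classifier = nn.Linear(512, num_classes)
    opt = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, noisy_targets in loader:
            with torch.no_grad():
                feats = encoder(images)        # fixed representation
            loss = loss_fn(classifier(feats), noisy_targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return classifier

Because the representation is never updated with label information, label noise can only affect the linear classifier, which the abstract observes is the more robust of the two components.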
Pages: 605 - 609
Number of pages: 5