Boosting semi-supervised learning with Contrastive Complementary Labeling

Cited by: 3
Authors
Deng, Qinyi [1 ]
Guo, Yong [1 ]
Yang, Zhibang [1 ]
Pan, Haolin [1 ]
Chen, Jian [1 ]
Affiliations
[1] South China Univ Technol, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Semi-supervised learning; Contrastive learning; Complementary labels;
DOI
10.1016/j.neunet.2023.11.052
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Semi-supervised learning (SSL) approaches have achieved great success in leveraging large amounts of unlabeled data to train deep models. Among them, one popular approach is pseudo-labeling, which generates pseudo labels only for those unlabeled data with high-confidence predictions. As for the low-confidence ones, existing methods often simply discard them because these unreliable pseudo labels may mislead the model. Unlike existing methods, we highlight that these low-confidence data can still be beneficial to the training process. Specifically, although we cannot determine which class a low-confidence sample belongs to, we can assume that the sample is very unlikely to belong to the classes with the lowest predicted probabilities (often called complementary classes/labels). Inspired by this, we propose a novel Contrastive Complementary Labeling (CCL) method that constructs a large number of reliable negative pairs based on the complementary labels and adopts contrastive learning to make use of all the unlabeled data. Extensive experiments demonstrate that CCL significantly improves performance on top of existing advanced methods and is particularly effective in label-scarce settings. For example, CCL yields an improvement of 2.43% over FixMatch on CIFAR-10 with only 40 labeled samples.
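The core selection step described in the abstract can be sketched as follows: for each low-confidence unlabeled sample, take the k classes with the lowest predicted probabilities as complementary labels. This is a minimal illustration under that assumption; the function name and the choice of k are hypothetical, not from the paper.

```python
import numpy as np

def complementary_labels(probs: np.ndarray, k: int = 2) -> np.ndarray:
    """Return, for each row of class probabilities, the indices of the
    k lowest-probability classes (the complementary labels)."""
    # argsort ascending, then keep the first k columns: these are the
    # classes the sample is very unlikely to belong to
    return np.argsort(probs, axis=1)[:, :k]

# toy softmax outputs for two unlabeled samples over four classes
probs = np.array([[0.05, 0.10, 0.60, 0.25],
                  [0.40, 0.02, 0.08, 0.50]])
print(complementary_labels(probs, k=2))  # -> [[0 1]
                                         #     [1 2]]
```

In CCL these complementary labels are then used to form reliable negative pairs for contrastive learning: a sample and any sample confidently assigned to one of its complementary classes should be pushed apart in the embedding space.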
Pages: 417-426 (10 pages)