Deep semi-supervised learning with contrastive learning and partial label propagation for image data

Cited: 18
Authors
Gan, Yanglan [1 ]
Zhu, Huichun [1 ]
Guo, Wenjing [1 ]
Xu, Guangwei [1 ]
Zou, Guobing [2 ]
Affiliations
[1] Donghua Univ, Sch Comp Sci & Technol, Shanghai 201620, Peoples R China
[2] Shanghai Univ, Sch Comp Engn & Sci, Shanghai 200444, Peoples R China
Keywords
Semi-supervised learning; Contrastive learning; Partial label propagation; Data augmentation; Neural networks; SMOTE
DOI
10.1016/j.knosys.2022.108602
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep semi-supervised learning has become an active research topic because it jointly exploits labeled and unlabeled samples when training deep neural networks. Recent advances mainly focus on inductive semi-supervised learning, which generally extends supervised algorithms to include unlabeled data. In this paper, we propose CL_PLP, a new transductive deep semi-supervised learning algorithm based on contrastive self-supervised learning and partial label propagation. The proposed method consists of two modules: a contrastive self-supervised learning module that extracts features from labeled and unlabeled data, and a partial label propagation module that generates confident pseudo-labels through label propagation. For contrastive learning, we propose an improved twins-network model by adding multiple projector layers and a contrastive loss term. Meanwhile, we adopt strong and weak data augmentation to increase the diversity of the dataset and the robustness of the model. For the partial label propagation module, we interrupt the label propagation process according to the quality of the pseudo-labels, strengthening the influence of high-quality pseudo-labels. On three standard benchmark datasets, CIFAR-10, CIFAR-100, and miniImageNet, our algorithm outperforms previous state-of-the-art transductive deep semi-supervised learning methods. Transferred to the medical COVID19-Xray dataset, the model also achieves good performance. Finally, we propose a strategy for integrating our partial label propagation module with inductive semi-supervised learning methods, and the results show that it further improves their performance and yields additional high-quality pseudo-labels for the unlabeled data.
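The "partial" label propagation idea described above (propagate labels over a similarity graph, then keep only pseudo-labels judged high-quality) can be sketched as graph-based diffusion with a confidence cutoff. The following NumPy sketch is an illustration under stated assumptions, not the paper's implementation: the cosine kNN graph, the diffusion coefficient `alpha`, and the `conf_threshold` quality criterion are all assumptions made here for concreteness.

```python
import numpy as np

def partial_label_propagation(feats, labels, labeled_idx, n_classes,
                              k=10, alpha=0.99, conf_threshold=0.9):
    """Sketch: diffuse seed labels over a kNN graph, keep only
    confident pseudo-labels. Hyperparameters are illustrative."""
    n = feats.shape[0]
    # Cosine-similarity kNN affinity graph over the extracted features.
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim = f @ f.T
    np.fill_diagonal(sim, 0.0)
    W = np.zeros_like(sim)
    for i in range(n):
        nbr = np.argsort(sim[i])[-k:]            # k nearest neighbors
        W[i, nbr] = np.maximum(sim[i, nbr], 0.0)
    W = (W + W.T) / 2                            # symmetrize
    d = np.maximum(W.sum(axis=1), 1e-12)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 W D^-1/2
    # Seed matrix: one-hot rows for labeled points, zeros elsewhere.
    Y = np.zeros((n, n_classes))
    Y[labeled_idx, labels[labeled_idx]] = 1.0
    # Closed-form diffusion (Zhou et al. style): Z = (I - alpha*S)^-1 Y.
    Z = np.linalg.solve(np.eye(n) - alpha * S, Y)
    probs = Z / (Z.sum(axis=1, keepdims=True) + 1e-12)
    pseudo = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    # "Partial": retain only pseudo-labels above the quality threshold,
    # and never overwrite the ground-truth labeled points.
    keep = conf >= conf_threshold
    keep[labeled_idx] = False
    return pseudo, keep
```

The closed-form solve is chosen here for brevity; at the scale of real feature sets an iterative or sparse solver would replace it, and the paper's own quality criterion for interrupting propagation may differ from this simple max-probability threshold.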
Pages: 11
Related Papers
45 in total
  • [21] Iscen, Ahmet; Tolias, Giorgos; Avrithis, Yannis; Chum, Ondrej. Label Propagation for Deep Semi-supervised Learning. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 5065-5074
  • [22] Jing, Longlong; Tian, Yingli. Self-Supervised Visual Feature Learning With Deep Neural Networks: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(11): 4037-4058
  • [23] Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. 2009
  • [24] Laine, S. Proceedings of ICLR, 2017: 1
  • [25] Lee, Dong-Hyun. Workshop Challenges, 2013
  • [26] Liu, Lei; Chen, Chen; Pei, Qingqi; Maharjan, Sabita; Zhang, Yan. Vehicular Edge Computing and Networking: A Survey. Mobile Networks & Applications, 2021, 26(03): 1145-1168
  • [27] Miyato, T. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41: 1979. DOI 10.1109/TPAMI.2018.2858821
  • [28] Qiao, Siyuan; Shen, Wei; Zhang, Zhishuai; Wang, Bo; Yuille, Alan. Deep Co-Training for Semi-Supervised Image Recognition. Computer Vision - ECCV 2018, Pt 15, 2018, 11219: 142-159
  • [29] Ramentol, Enislay; Caballero, Yaile; Bello, Rafael; Herrera, Francisco. SMOTE-RSB*: A Hybrid Preprocessing Approach Based on Oversampling and Undersampling for High Imbalanced Data-Sets Using SMOTE and Rough Sets Theory. Knowledge and Information Systems, 2012, 33(02): 245-265
  • [30] Roberts, M. arXiv preprint, 2020