ASCL: Accelerating semi-supervised learning via contrastive learning

Cited by: 1
Authors
Liu, Haixiong [1 ]
Li, Zuoyong [2 ]
Wu, Jiawei [3 ]
Zeng, Kun [2 ]
Hu, Rong [1 ,4 ]
Zeng, Wei [5 ]
Affiliations
[1] Fujian Univ Technol, Sch Comp Sci & Math, Fujian Prov Key Lab Big Data Min & Applicat, Fuzhou 350118, Peoples R China
[2] Minjiang Univ, Sch Comp & Big Data, Fujian Prov Key Lab Informat Proc & Intelligent Co, Fuzhou 350121, Peoples R China
[3] Shenzhen Campus Sun Yat Sen Univ, Sch Intelligent Syst Engn, Shenzhen, Guangdong, Peoples R China
[4] Wuyi Univ, Key Lab Cognit Comp & Intelligent Informat Proc, Fujian Educ Inst, Wuyishan, Peoples R China
[5] Longyan Univ, Sch Phys & Mech & Elect Engn, Longyan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
contrastive learning; image classification; semi-supervised learning; uncertainty estimation;
DOI
10.1002/cpe.8293
Chinese Library Classification
TP31 [Computer Software];
Discipline Code
081202; 0835;
Abstract
Semi-supervised learning (SSL) is widely used in machine learning because it leverages both labeled and unlabeled data to improve model performance. SSL aims to maximize class-level mutual information, but when labels are scarce, noisy pseudo-labels introduce false class information; as a result, SSL algorithms often require substantial training time to iteratively refine pseudo-labels before performance improves. To tackle this challenge, we propose a novel plug-and-play method named Accelerating semi-supervised learning via contrastive learning (ASCL), which combines contrastive learning with uncertainty-based selection to improve performance and accelerate the convergence of SSL algorithms. Contrastive learning initially emphasizes the mutual information between samples, reducing dependence on pseudo-labels, and then gradually shifts to maximizing the mutual information between classes, in line with the objective of semi-supervised learning. Uncertainty-based selection provides a robust mechanism for acquiring reliable pseudo-labels. Together, the contrastive learning module and the uncertainty-based selection module form a virtuous cycle that improves the performance of the proposed model. Extensive experiments demonstrate that ASCL outperforms state-of-the-art methods in both convergence efficiency and accuracy. On CIFAR-10 with only one label per class, applying ASCL to Pseudo-label, UDA (unsupervised data augmentation for consistency training), and FixMatch improves classification accuracy by 16.32%, 6.9%, and 24.43%, respectively, over the original methods, while reducing the required training time by almost 50%.
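The abstract outlines two interacting modules: uncertainty-based selection, which admits only confident pseudo-labels, and a contrastive objective that maximizes mutual information between samples. The paper's exact formulation is not reproduced in this record; the following is a minimal PyTorch sketch of that general recipe, in which the confidence threshold, temperature, loss weighting, and all function names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def select_confident_pseudo_labels(logits, threshold=0.95):
    # Uncertainty-based selection (assumed form): keep only predictions
    # whose maximum softmax probability clears the confidence threshold.
    probs = F.softmax(logits, dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    return pseudo_labels, confidence >= threshold

def info_nce_loss(z_a, z_b, temperature=0.5):
    # SimCLR-style InfoNCE between two augmented views: matching rows of
    # z_a and z_b are positives, all other rows serve as negatives. This
    # term maximizes mutual information between samples.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage on random tensors standing in for one unlabeled batch.
batch, num_classes, feat_dim = 32, 10, 128
logits_u = torch.randn(batch, num_classes)   # classifier outputs
z_a = torch.randn(batch, feat_dim)           # embeddings, augmented view A
z_b = torch.randn(batch, feat_dim)           # embeddings, augmented view B

pseudo, mask = select_confident_pseudo_labels(logits_u)
loss_pl = (F.cross_entropy(logits_u[mask], pseudo[mask])
           if mask.any() else logits_u.new_zeros(()))
loss = loss_pl + info_nce_loss(z_a, z_b)     # illustrative 1:1 weighting

In the abstract's framing, the contrastive term would carry most of the signal early in training and the pseudo-label term would take over as confident predictions accumulate; the scheduling between the two is not specified in this record.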
Pages: 15