Enhancing Unsupervised Few-Shot Medical Image Classification with Weight-Enhanced Contrastive Learning

Cited: 0
Authors
Liu, Huantong [1 ]
Zhong, Jingze [1 ]
Affiliations
[1] Shanghai Univ, Shanghai, Peoples R China
Source
PROCEEDINGS OF INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, MACHINE LEARNING AND PATTERN RECOGNITION, IPMLP 2024 | 2024
Keywords
Medical Image Classification; Weights Augmentation; Contrastive Learning; Unsupervised Learning; Few-Shot Learning;
DOI
10.1145/3700906.3700921
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In contemporary deep learning, the central challenge for few-shot and unsupervised learning is exploiting scarce labeled samples alongside abundant unlabeled ones, particularly in medical image classification. This paper introduces an approach that integrates dynamic clustering with weight augmentation to improve few-shot medical image classification. Our method, Weight-Enhanced Contrastive Learning (WECL), combines contrastive representation learning with a dynamic memory module during unsupervised pre-training, enabling efficient clustering and classification of different augmented views of the same image. In addition, the weight-augmentation strategy adjusts the weights of both the ResNet backbone and the teacher-student branches, mitigating sample bias and strengthening the pre-trained model. Extensive experiments on multiple few-shot medical image classification datasets show that WECL outperforms current state-of-the-art baselines and effectively addresses data distribution disparities and sample scarcity.
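The abstract names two generic ingredients: contrastive pre-training against a memory of negative samples, and a teacher-student weight coupling. The record does not specify the paper's actual loss, memory update, or weight-augmentation schedule, so the sketch below is purely illustrative: it shows a standard InfoNCE contrastive loss with a memory bank of negatives and an exponential-moving-average (EMA) teacher update, with all function names and parameter values chosen by assumption.

```python
import numpy as np

def info_nce_loss(query, key, memory, temperature=0.07):
    """InfoNCE: each query should match its positive key against memory negatives.

    query, key : (N, D) embeddings of two augmented views of the same images.
    memory     : (M, D) bank of negative embeddings (a "dynamic memory").
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q, k, mem = normalize(query), normalize(key), normalize(memory)
    pos = np.sum(q * k, axis=1, keepdims=True)   # (N, 1) positive logits
    neg = q @ mem.T                              # (N, M) negative logits
    logits = np.concatenate([pos, neg], axis=1) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Cross-entropy with the positive always at index 0.
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()

def ema_update(teacher, student, momentum=0.99):
    """Teacher weights track the student via an exponential moving average,
    one common way to couple teacher-student branches during pre-training."""
    return {name: momentum * teacher[name] + (1 - momentum) * student[name]
            for name in teacher}
```

Under this setup, embeddings of two views of the same image pull together while the memory bank supplies negatives, and the slowly-moving teacher provides stable targets; how WECL augments or reweights these branches is detailed in the paper itself.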
Pages: 91-98
Number of pages: 8