Cross-domain self-supervised few-shot learning via multiple crops with teacher-student network

Cited by: 0
Authors
Wang, Guangpeng [1 ]
Wang, Yongxiong [1 ]
Zhang, Jiapeng [1 ]
Wang, Xiaoming [1 ]
Pan, Zhiqun [1 ]
Affiliations
[1] Univ Shanghai Sci & Technol, Sch Opt Elect & Comp Engn, Shanghai 200093, Peoples R China
Funding
Natural Science Foundation of Shanghai;
Keywords
Cross-domain; Few-shot learning; Image recognition; Self-supervised learning; Teacher network; Student network;
DOI
10.1016/j.engappai.2024.107892
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Most few-shot learning (FSL) methods rely on a network pre-trained on a large annotated base dataset whose feature distribution is similar to that of the target domain. Conventional transfer learning and traditional few-shot learning methods are ineffective when there is a large gap between the source and target domains. We propose a simple teacher-student network solution that exploits unlabeled images from the target domain to alleviate the domain gap. We impose a self-supervised loss by computing predictions from large crops of the unannotated target-domain samples with a teacher network and matching them to predictions from small crops of the same images produced by a student network. Furthermore, we design a novel contrastive loss on the large crops to fully exploit the self-supervised information in the unlabeled target-domain images during model training. The learned feature representation generalizes readily to the target domain without a pretraining phase on target-specific classes. The accuracies of our model are 23.61 +/- 0.42, 33.87 +/- 0.59, 63.21 +/- 0.88, and 74.36 +/- 0.88 on the ChestX, ISIC, EuroSAT, and CropDisease datasets, respectively, in the 1-shot scenario. Extensive experiments show that the proposed method achieves competitive performance on challenging cross-domain FSL image classification benchmarks.
Pages: 11
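The abstract describes a multi-crop teacher-student self-supervised objective on unlabeled target-domain images. Below is a minimal sketch of how such an objective could look in PyTorch, assuming a DINO-style formulation in which an EMA teacher sees large (global) crops and the student's predictions on small (local) crops are matched to the teacher's outputs. All names here (EncoderWithHead, ema_update, multicrop_distill_loss, proj_dim, the temperatures) are illustrative assumptions, not taken from the paper, and the paper's additional contrastive loss on large crops is omitted.

# Minimal sketch (an assumption, not the authors' released code) of a
# multi-crop teacher-student self-supervised loss on unlabeled
# target-domain images, in a DINO-style formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class EncoderWithHead(nn.Module):
    # Backbone plus a projection head producing logits over proj_dim prototypes.
    def __init__(self, backbone: nn.Module, feat_dim: int, proj_dim: int = 256):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.GELU(),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x):
        return self.head(self.backbone(x))


@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.996):
    # Teacher parameters track the student by exponential moving average.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(momentum).add_(p_s.data, alpha=1.0 - momentum)


def multicrop_distill_loss(student, teacher, large_crops, small_crops,
                           t_student: float = 0.1, t_teacher: float = 0.04):
    # Cross-entropy between teacher predictions on large crops and student
    # predictions on small crops of the same unlabeled target-domain images.
    with torch.no_grad():  # no gradients flow into the teacher
        teacher_probs = [F.softmax(teacher(c) / t_teacher, dim=-1) for c in large_crops]
    student_logp = [F.log_softmax(student(c) / t_student, dim=-1) for c in small_crops]

    loss, n_pairs = 0.0, 0
    for t_p in teacher_probs:
        for s_lp in student_logp:
            loss = loss + torch.mean(torch.sum(-t_p * s_lp, dim=-1))
            n_pairs += 1
    return loss / n_pairs

In a training step one would compute this loss on batches of unlabeled target-domain images (for example, two large crops and several small crops per image), backpropagate through the student only, and then call ema_update; the teacher would be initialized as a copy of the student.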