SSL-DC: Improving Transductive Few-Shot Learning via Self-Supervised Learning and Distribution Calibration

Cited by: 0
Authors
Yang, Huayi [1 ,2 ]
Wang, Deqing [1 ,2 ]
Zhao, Zhengyang [1 ,2 ]
Wang, Xuying [3 ]
Affiliations
[1] Beihang Univ, SKLSDE, Beijing, Peoples R China
[2] Beihang Univ, BDBC Lab, Beijing, Peoples R China
[3] Chinese Univ Hong Kong, Shenzhen, Peoples R China
Keywords
DOI
10.1109/ICPR56361.2022.9956433
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Few-shot learning aims to recognize unseen classes from only a few labeled samples and therefore remains prone to overfitting. The transductive few-shot learning paradigm can reduce overfitting by learning a highly discriminative feature representation via self-supervised learning, since all unlabeled query samples are allowed to be accessed. In this paper, we propose a simple but effective approach based on self-supervised pre-training and nearest-class-prototype search, which yields significant gains on transductive few-shot learning tasks without external samples. However, because the class prototype is computed from a limited number of support samples, it is easily skewed by biased samples. We therefore train a conditional generative adversarial network to estimate the feature distribution, instead of assuming it is Gaussian as in prior work. Features close to the real ones can then be drawn from the estimated distribution to calibrate the class prototypes. Extensive experiments show that our method significantly outperforms many recent transductive few-shot learning methods, achieving 9.83% and 4.38% improvements over the previous best method under the transductive 5-way 1-shot and 5-shot settings with ResNet-12 on miniImageNet.
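The sketch below illustrates the prototype-calibration step the abstract describes: mixing each class's few real support features with features sampled from a class-conditional generator, then classifying queries by nearest-prototype search. It is a minimal reading of the idea, not the authors' implementation; the generator interface (including its latent_dim attribute), the toy generator itself, the number of generated features, and the cosine-similarity metric are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyConditionalGenerator(nn.Module):
    """Stand-in for the paper's conditional GAN generator: maps
    (noise, class label) to a feature vector. Purely illustrative."""
    def __init__(self, latent_dim=64, n_way=5, feat_dim=640):
        super().__init__()
        self.latent_dim = latent_dim
        self.embed = nn.Embedding(n_way, latent_dim)
        self.net = nn.Linear(2 * latent_dim, feat_dim)

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=-1))

def calibrated_prototypes(support_feats, support_labels, generator, n_way, n_gen=50):
    """Average each class's real support features together with
    generator samples for that class: one plausible reading of
    'calibrating the class prototype' with generated features."""
    protos = []
    for c in range(n_way):
        real = support_feats[support_labels == c]          # (k_shot, d)
        z = torch.randn(n_gen, generator.latent_dim)       # noise input
        cond = torch.full((n_gen,), c, dtype=torch.long)   # class condition
        fake = generator(z, cond)                          # (n_gen, d)
        protos.append(torch.cat([real, fake], dim=0).mean(dim=0))
    return torch.stack(protos)                             # (n_way, d)

def nearest_prototype_predict(query_feats, prototypes):
    """Nearest-class-prototype search; cosine similarity on
    L2-normalized features (the metric is an assumption)."""
    q = F.normalize(query_feats, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    return (q @ p.t()).argmax(dim=-1)

# Toy 5-way 1-shot episode with 640-d features (a ResNet-12-like width).
if __name__ == "__main__":
    gen = ToyConditionalGenerator()
    support = torch.randn(5, 640)
    labels = torch.arange(5)
    queries = torch.randn(75, 640)
    with torch.no_grad():
        protos = calibrated_prototypes(support, labels, gen, n_way=5)
        preds = nearest_prototype_predict(queries, protos)
    print(preds.shape)  # torch.Size([75])
```

In the real method the generator would first be trained adversarially on base-class features; the sketch only shows how its samples would enter the prototype average at test time.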
Pages: 4892-4898
Page count: 7
Related Papers
50 records in total
  • [1] Transductive distribution calibration for few-shot learning
    Li, Gang
    Zheng, Changwen
    Su, Bing
    NEUROCOMPUTING, 2022, 500: 604-615
  • [2] SSL-ProtoNet: Self-supervised Learning Prototypical Networks for few-shot learning
    Lim, Jit Yan
    Lim, Kian Ming
    Lee, Chin Poo
    Tan, Yong Xuan
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238
  • [3] Improving In-Context Few-Shot Learning via Self-Supervised Training
    Chen, Mingda
    Du, Jingfei
    Pasunuru, Ramakanth
    Mihaylov, Todor
    Iyer, Srini
    Stoyanov, Veselin
    Kozareva, Zornitsa
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022: 3558-3573
  • [4] Collaborative Self-Supervised Transductive Few-Shot Learning for Remote Sensing Scene Classification
    Han, Haiyan
    Huang, Yangchao
    Wang, Zhe
    ELECTRONICS, 2023, 12 (18)
  • [5] A robust transductive distribution calibration method for few-shot learning
    Li, Jingcong
    Ye, Chunjin
    Wang, Fei
    Pan, Jiahui
    PATTERN RECOGNITION, 2025, 163
  • [6] Unsupervised Few-Shot Feature Learning via Self-Supervised Training
    Ji, Zilong
    Zou, Xiaolong
    Huang, Tiejun
    Wu, Si
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2020, 14
  • [7] Reinforced Self-Supervised Training for Few-Shot Learning
    Yan, Zhichao
    An, Yuexuan
    Xue, Hui
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31: 731-735
  • [8] Conditional Self-Supervised Learning for Few-Shot Classification
    An, Yuexuan
    Xue, Hui
    Zhao, Xingyu
    Zhang, Lu
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021: 2140-2146
  • [9] Self-Supervised Few-Shot Learning on Point Clouds
    Sharma, Charu
    Kaul, Manohar
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33