Contrastive knowledge-augmented self-distillation approach for few-shot learning

Cited by: 1
Authors
Zhang, Lixu [1]
Shao, Mingwen [1]
Chen, Sijie [1]
Liu, Fukang [1]
Affiliations
[1] China University of Petroleum, College of Computer Science and Technology, Qingdao, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
deep learning; few-shot learning; knowledge distillation; contrastive learning
DOI
10.1117/1.JEI.32.5.053037
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification
0808; 0809
Abstract
Few-shot learning aims to train a classifier that can be quickly adapted to new tasks with only a few samples. To address few-shot learning tasks, metric-based meta-learning methods explore appropriate metrics to measure the similarity between support and query samples. However, existing methods ignore the similarity relationships among the labeled samples in the support set. We therefore propose a contrastive knowledge-augmented self-distillation approach that leverages the similarity relationships among the few labeled samples in the support set and allows the model to attend to more regions of the images. Specifically, we compute the classification probabilities of the query images and of each class prototype, and treat the classification probability of each class prototype as a teacher that guides the classification of the query samples. Meanwhile, we design a contrastive loss that pulls feature vectors of the same class closer together and pushes feature vectors of different classes further apart. In addition, a transformation function encourages the model to attend to more regions of the images so as to capture key features. Extensive experiments on miniImageNet, tieredImageNet, and Caltech-UCSD Birds 200 show that our method enhances metric-based meta-learning methods and outperforms state-of-the-art methods.
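The two ingredients the abstract describes, prototype-based metric classification and a supervised contrastive loss over support-set features, might be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the negative-squared-Euclidean metric (ProtoNet-style), the temperature `tau`, and all function names are assumptions for exposition.

```python
import numpy as np

def prototypes(support, labels, n_classes):
    # Class prototype = mean embedding of that class's support samples.
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def proto_probs(queries, protos):
    # Logits = negative squared Euclidean distance to each prototype
    # (one common metric choice; the paper may use a different one).
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return softmax(-d)

def contrastive_loss(embs, labels, tau=0.5):
    # Supervised-contrastive-style loss: for each anchor, same-class
    # embeddings are positives, all other embeddings are negatives.
    z = embs / (np.linalg.norm(embs, axis=1, keepdims=True) + 1e-8)
    sim = z @ z.T / tau  # cosine similarities scaled by temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        pos = [j for j in others if labels[j] == labels[i]]
        if not pos:
            continue
        denom = np.exp(sim[i, others]).sum()
        loss -= sum(np.log(np.exp(sim[i, j]) / denom) for j in pos) / len(pos)
        count += 1
    return loss / max(count, 1)
```

In a training loop, the query probabilities from `proto_probs` would be regularized against the teacher signal derived from the prototypes (the self-distillation term), while `contrastive_loss` is applied to the support embeddings.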
Pages: 14