From Sample Poverty to Rich Feature Learning: A New Metric Learning Method for Few-Shot Classification

Citations: 3
Authors
Zhang, Lei [1 ,2 ]
Lin, Yiting [1 ,2 ]
Yang, Xinyu [1 ,2 ]
Chen, Tingting [1 ]
Cheng, Xiyuan [1 ,3 ]
Cheng, Wenbin [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Zhongshan Inst, Sch Comp, Zhongshan 528402, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
[3] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
Source
IEEE ACCESS | 2024, Vol. 12
Funding
National Natural Science Foundation of China
Keywords
Task analysis; Data models; Image classification; Training; Measurement; Few shot learning; Feature extraction; Few-Shot Learning; Image Classification; Metric Learning; NETWORKS;
DOI
10.1109/ACCESS.2024.3444483
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
With the rapid development of deep learning in computer vision, few-shot learning has emerged as an effective approach to the challenge of data scarcity and has attracted widespread attention from researchers. Despite significant progress, current few-shot image classification methods do not fully exploit the feature extraction capability of the backbone network or the available labeled data. To address this issue, we introduce a metric-learning-based few-shot image classification method that aims to make fuller use of these resources. During training, we employ the PatchUp technique to perform block-level operations in the hidden-layer feature space, producing more diverse feature representations; this broadens the range of data representation and helps form smoother classification decision boundaries. In addition, a self-supervised auxiliary loss helps the network learn deep semantic information stably, improving classification performance on novel categories. Furthermore, centering and normalizing the features extracted by the backbone network significantly improves the model's performance on few-shot image classification tasks. We conduct extensive experiments on multiple public datasets (miniImageNet, tieredImageNet, FC-100, and CUB-200-2011), demonstrating the effectiveness and strong performance of the proposed method. The results show that the method significantly improves classification accuracy and generalization in the 1-shot setting, providing solid support for further research and application in few-shot learning.
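The PatchUp step described in the abstract operates on hidden-layer feature maps rather than on input images. The following NumPy sketch is not the authors' implementation; it illustrates a hard PatchUp-style operation under stated assumptions: contiguous blocks of each sample's feature map are swapped with those of a randomly paired sample, and the swapped fraction is returned so a training loss could be re-weighted. The mask parameters (block_size, gamma) and the single mask shared across the batch are illustrative choices.

import numpy as np

def block_mask(h, w, block_size=3, gamma=0.05, rng=None):
    """DropBlock-style binary mask: 1 keeps a sample's own features,
    0 marks positions taken from its shuffled partner."""
    rng = rng or np.random.default_rng()
    mask = np.ones((h, w))
    half = block_size // 2
    for i, j in zip(*np.nonzero(rng.random((h, w)) < gamma)):
        mask[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1] = 0.0
    return mask

def hard_patchup(feats, rng=None):
    """Swap contiguous feature-map blocks between each sample and a shuffled partner.

    feats: (batch, channels, H, W) hidden-layer activations.
    Returns the mixed features, the partner permutation, and the fraction of
    swapped positions (useful for re-weighting the classification loss).
    """
    rng = rng or np.random.default_rng()
    b, c, h, w = feats.shape
    perm = rng.permutation(b)
    mask = block_mask(h, w, rng=rng)              # one mask shared across the batch
    mixed = mask * feats + (1.0 - mask) * feats[perm]
    return mixed, perm, 1.0 - mask.mean()

# Toy usage on random ResNet-style activations.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8, 10, 10))
mixed, perm, swapped = hard_patchup(feats, rng)
print(mixed.shape, perm, round(swapped, 3))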
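The abstract also attributes part of the 1-shot gain to centering and normalizing the backbone features before metric-based classification. The sketch below is again an assumption rather than the paper's code: it centers support and query features with a mean estimated from base-class data, L2-normalizes them, and classifies queries by cosine similarity to class prototypes, which is one common way such a metric-learning pipeline is set up.

import numpy as np

def center_and_normalize(features, base_mean):
    """Subtract an estimated feature mean, then project onto the unit sphere."""
    centered = features - base_mean                        # centralization
    norms = np.linalg.norm(centered, axis=-1, keepdims=True)
    return centered / np.clip(norms, 1e-12, None)          # L2 normalization

def prototype_classify(query_feats, support_feats, support_labels, n_way):
    """Cosine-similarity nearest-prototype prediction for an N-way episode."""
    prototypes = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in range(n_way)]
    )
    prototypes /= np.linalg.norm(prototypes, axis=-1, keepdims=True)
    return (query_feats @ prototypes.T).argmax(axis=-1)    # cosine scores

# Toy 5-way 1-shot episode with 64-dimensional backbone features.
rng = np.random.default_rng(0)
base_mean = rng.normal(size=64)                            # mean of base-class features
support = center_and_normalize(rng.normal(size=(5, 64)), base_mean)
queries = center_and_normalize(rng.normal(size=(15, 64)), base_mean)
print(prototype_classify(queries, support, np.arange(5), n_way=5))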
Pages: 124990-125002
Page count: 13