Autonomous perception and adaptive standardization for few-shot learning

Cited by: 4
Authors
Zhang, Yourun [1]
Gong, Maoguo [1]
Li, Jianzhao [1]
Feng, Kaiyuan [1]
Zhang, Mingyang [1]
Affiliations
[1] Xidian University, Key Laboratory of Collaborative Intelligence Systems, Ministry of Education, 2 South TaiBai Road, Xi'an 710071, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Few-shot learning; Image classification; Deep learning; Feature extraction; RAT MODEL; NETWORK; ALIGNMENT;
DOI
10.1016/j.knosys.2023.110746
CLC classification number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Identifying unseen classes from only a few labeled reference samples, a task known as few-shot learning, is challenging. Generally, a knowledge-rich model is more robust than a knowledge-poor model when facing novel situations. An intuitive way to enrich knowledge is to collect additional training data, but this conflicts with the principle of few-shot learning, which aims to reduce reliance on big data; improving the utilization of existing data is therefore a more attractive option. In this paper, we propose a batch perception distillation approach that improves the utilization of existing data by guiding the classification of each sample with the intermixed information across a batch. Beyond data utilization, obtaining robust feature representations is also a concern. Specifically, the widely adopted metric-based few-shot classification approach classifies unseen testing classes by comparing the extracted features of novel samples, which requires that the extracted features accurately represent the class-related clues of the input images. We therefore propose a salience perception attention that enables the model to focus more easily on key clues in images, reducing the interference of irrelevant factors during classification. To overcome the distribution gap between the training classes and the unseen testing classes, we propose a weighted centering post-processing that standardizes the testing data according to the similarity between the training and testing classes. By combining the three proposed components, our method achieves superior performance on four widely used few-shot image classification datasets.
Pages: 14
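
The abstract describes two concrete computational steps: metric-based classification, which compares the extracted features of novel samples against class references, and a weighted centering post-processing, which standardizes testing features according to their similarity to the training classes. The sketch below illustrates one plausible reading of that recipe in PyTorch; it is a minimal illustration, not the paper's method. The function names, the cosine-similarity softmax weighting, and the temperature `tau` are assumptions introduced here, since the record does not give the exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_centering(features, base_centroids, tau=10.0):
    """Hypothetical weighted centering post-processing (illustration only).

    Subtracts from each novel-class feature a mean of the base (training)
    class centroids, weighted by feature-to-centroid similarity, so that
    testing features are standardized toward the training distribution.
    """
    # Cosine similarity between each feature and each base-class centroid.
    sims = F.normalize(features, dim=-1) @ F.normalize(base_centroids, dim=-1).T
    weights = F.softmax(tau * sims, dim=-1)   # shape: (n_samples, n_base)
    centers = weights @ base_centroids        # per-sample weighted center
    return features - centers

def prototype_classify(support, support_labels, query, n_way):
    """Standard metric-based few-shot classification with class prototypes."""
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )
    # Negative Euclidean distance serves as the similarity score.
    return (-torch.cdist(query, prototypes)).argmax(dim=-1)

# Toy usage: a 5-way 1-shot episode with 64-dimensional features.
support = torch.randn(5, 64)
labels = torch.arange(5)
query = torch.randn(15, 64)
base_centroids = torch.randn(100, 64)  # centroids of 100 training classes

support = weighted_centering(support, base_centroids)
query = weighted_centering(query, base_centroids)
print(prototype_classify(support, labels, query, n_way=5))
```

Centering both support and query features against the same base-class statistics is one common convention for such post-processing; whether the paper standardizes the query features only, or both, is not stated in this record.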
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 3336 - 3350