Prototype Completion for Few-Shot Learning

Cited by: 9
Authors
Zhang, Baoquan [1]
Li, Xutao [1]
Ye, Yunming [1]
Feng, Shanshan [1]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Few-shot learning; image classification; meta-learning; classification;
DOI
10.1109/TPAMI.2023.3277881
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Few-shot learning (FSL) aims to recognize novel classes from only a few examples. Pre-training based methods tackle the problem effectively by pre-training a feature extractor and then fine-tuning it through nearest-centroid based meta-learning. However, results show that the fine-tuning step yields only marginal improvements. In this paper, 1) we identify the reason: in the pre-trained feature space, the base classes already form compact clusters while novel classes spread out as groups with large variances, which implies that fine-tuning the feature extractor is of limited benefit; and 2) instead of fine-tuning the feature extractor, we focus on estimating more representative prototypes. Accordingly, we propose a novel prototype completion based meta-learning framework. The framework first introduces primitive knowledge (i.e., class-level part or attribute annotations) and extracts representative features for seen attributes as priors. Second, a part/attribute transfer network is designed to learn to infer the representative features for unseen attributes as supplementary priors. Third, a prototype completion network is devised to learn to complete prototypes with these priors. Moreover, to mitigate prototype completion errors, we further develop a Gaussian based prototype fusion strategy that fuses the mean-based and completed prototypes by exploiting unlabeled samples. Finally, for a fair comparison with existing FSL methods that use no external knowledge, we also develop an economical prototype completion version of our method that does not require collecting primitive knowledge. Extensive experiments show that our method: i) obtains more accurate prototypes; and ii) achieves superior performance in both inductive and transductive FSL settings.
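The abstract names two computations that a short example can make concrete: nearest-centroid classification with mean-based prototypes, and a Gaussian based fusion of mean-based and completed prototypes using unlabeled samples. Below is a minimal NumPy sketch; the inverse-variance (product-of-Gaussians) weighting and all function names (`mean_prototypes`, `gaussian_fusion`, etc.) are illustrative assumptions, not the authors' implementation, which additionally includes the part/attribute transfer and prototype completion networks omitted here.

```python
import numpy as np

def mean_prototypes(feats, labels, n_classes):
    """Mean-based prototypes: per-class average of the support features."""
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def nearest_centroid_predict(feats, protos):
    """Nearest-centroid classification in the (frozen) pre-trained feature space."""
    dists = np.linalg.norm(feats[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def class_variances(feats, protos):
    """Isotropic per-class variance of unlabeled features around each prototype,
    using nearest-prototype pseudo-labels; classes with no pseudo-labeled
    samples fall back to variance 1.0 (an assumption for this sketch)."""
    pseudo = nearest_centroid_predict(feats, protos)
    var = np.ones(len(protos))
    for c in range(len(protos)):
        members = feats[pseudo == c]
        if len(members) > 0:
            var[c] = np.mean(np.sum((members - protos[c]) ** 2, axis=-1)) + 1e-8
    return var

def gaussian_fusion(p_mean, p_comp, unlabeled_feats):
    """Product-of-Gaussians style fusion: each prototype estimate is weighted
    by its inverse variance, estimated from unlabeled samples. The paper's
    exact weighting may differ; this is an illustrative variant."""
    w_mean = (1.0 / class_variances(unlabeled_feats, p_mean))[:, None]
    w_comp = (1.0 / class_variances(unlabeled_feats, p_comp))[:, None]
    return (w_mean * p_mean + w_comp * p_comp) / (w_mean + w_comp)

# Toy 5-way 1-shot episode with synthetic features.
rng = np.random.default_rng(0)
n_way, dim = 5, 64
support = rng.normal(size=(n_way, dim))                # one shot per class
p_mean = mean_prototypes(support, np.arange(n_way), n_way)
p_comp = p_mean + 0.1 * rng.normal(size=p_mean.shape)  # stand-in for completed prototypes
unlabeled = rng.normal(size=(50, dim))
p_fused = gaussian_fusion(p_mean, p_comp, unlabeled)
print(nearest_centroid_predict(rng.normal(size=(20, dim)), p_fused))
```

The inverse-variance weighting means that whichever prototype estimate sits nearer the center of its pseudo-labeled cluster (lower scatter) dominates the fused prototype, which matches the abstract's stated goal of using fusion to guard against prototype completion errors.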
Pages: 12250-12268
Page count: 19