Prototype Completion for Few-Shot Learning

Cited by: 9
Authors
Zhang, Baoquan [1 ]
Li, Xutao [1 ]
Ye, Yunming [1 ]
Feng, Shanshan [1 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Few-shot learning; image classification; meta-learning; classification
DOI
10.1109/TPAMI.2023.3277881
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Few-shot learning (FSL) aims to recognize novel classes from only a few examples. Pre-training based methods tackle the problem effectively by pre-training a feature extractor and then fine-tuning it through nearest-centroid based meta-learning. However, results show that the fine-tuning step yields only marginal improvements. In this paper, 1) we identify the reason: in the pre-trained feature space, the base classes already form compact clusters while novel classes spread out as groups with large variances, which implies that fine-tuning the feature extractor is of limited value; 2) instead of fine-tuning the feature extractor, we focus on estimating more representative prototypes. Accordingly, we propose a novel prototype completion based meta-learning framework. The framework first introduces primitive knowledge (i.e., class-level part or attribute annotations) and extracts representative features for seen attributes as priors. Second, a part/attribute transfer network is designed to learn to infer the representative features of unseen attributes as supplementary priors. Finally, a prototype completion network is devised to learn to complete prototypes from these priors. Moreover, to mitigate prototype completion errors, we further develop a Gaussian based prototype fusion strategy that fuses the mean-based and completed prototypes by exploiting unlabeled samples. Lastly, for a fair comparison with existing FSL methods that use no external knowledge, we also develop an economic prototype completion version that does not require collecting primitive knowledge. Extensive experiments show that our method: i) obtains more accurate prototypes; and ii) achieves superior performance in both inductive and transductive FSL settings.
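To make the prototype estimation and fusion steps above concrete, the following Python (PyTorch) sketch shows mean-based (nearest-centroid) prototypes and a simple confidence-weighted fusion with completed prototypes using unlabeled features. It is only an illustrative approximation: the function names (mean_prototypes, fuse_prototypes), the soft-assignment weighting, and the toy data are assumptions, not the authors' Gaussian based fusion or their released code.

import torch

def mean_prototypes(support_feats, support_labels, num_classes):
    # support_feats: (N, D) embeddings of the support set; support_labels: (N,)
    # Mean-based prototype = centroid of each class's support embeddings.
    return torch.stack([
        support_feats[support_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])  # (num_classes, D)

def fuse_prototypes(mean_protos, completed_protos, unlabeled_feats, temperature=1.0):
    # Hypothetical stand-in for the paper's Gaussian based fusion: each prototype
    # set is weighted by how strongly the unlabeled samples soft-assign to it.
    def confidence(protos):
        dists = torch.cdist(unlabeled_feats, protos)                    # (M, C) Euclidean distances
        return torch.softmax(-dists / temperature, dim=1).mean(dim=0)   # (C,) average assignment mass
    w_mean = confidence(mean_protos)
    w_comp = confidence(completed_protos)
    w = (w_mean / (w_mean + w_comp + 1e-8)).unsqueeze(1)                # (C, 1) per-class weight
    return w * mean_protos + (1.0 - w) * completed_protos

# Toy usage: a 5-way 1-shot episode with 64-dimensional random features.
support = torch.randn(5, 64)
labels = torch.arange(5)
protos = mean_prototypes(support, labels, num_classes=5)
completed = protos + 0.1 * torch.randn_like(protos)   # placeholder for completed prototypes
fused = fuse_prototypes(protos, completed, unlabeled_feats=torch.randn(20, 64))

A query would then be classified by the usual nearest-centroid rule, i.e., assigned to the class whose fused prototype is closest in the embedding space.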
Pages: 12250-12268
Page count: 19
Related Papers
50 records in total
  • [1] Prototype Reinforcement for Few-Shot Learning
    Xu, Liheng
    Xie, Qian
    Jiang, Baoqing
    Zhang, Jiashuo
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020: 4912-4916
  • [2] WPE: Weighted prototype estimation for few-shot learning
    Cao, Jiangzhong
    Yao, Zijie
    Yu, Lianggeng
    Ling, Bingo Wing-Kuen
    IMAGE AND VISION COMPUTING, 2023, 137
  • [3] Few-shot learning with representative global prototype
    Liu, Yukun
    Shi, Daming
    Lin, Hexiu
    NEURAL NETWORKS, 2024, 180
  • [4] Distributed few-shot learning with prototype distribution correction
    Fu, Zhiling
    Tang, Dongfang
    Ma, Pingchuan
    Wang, Zhe
    Gao, Wen
    APPLIED INTELLIGENCE, 2023, 53 (24): 30552-30565
  • [5] Prototype Relationship Optimization Network for Few-Shot Learning
    Wang, Dengzhong
    Zhong, Yuan
    Ma, Yunfei
    Guo, Chunsheng
    IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, 2025, 20 (03): 405-414
  • [6] Prototype Bayesian Meta-Learning for Few-Shot Image Classification
    Fu, Meijun
    Wang, Xiaomin
    Wang, Jun
    Yi, Zhang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024: 1-15
  • [7] Contrastive prototype learning with semantic patchmix for few-shot image classification
    Dong, Mengping
    Lei, Fei
    Li, Zhenbo
    Liu, Xue
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 142
  • [8] Meta-Learning based prototype-relation network for few-shot classification
    Liu, Xiaoqian
    Zhou, Fengyu
    Liu, Jin
    Jiang, Lianjie
    NEUROCOMPUTING, 2020, 383: 224-234
  • [9] Survey on Few-shot Learning
    Zhao, K.-L.
    Jin, X.-L.
    Wang, Y.-Z.
    Ruan Jian Xue Bao/Journal of Software, 2021, 32 (02): 349-369