SAPENet: Self-Attention based Prototype Enhancement Network for Few-shot Learning

Cited by: 35
Authors
Huang, Xilang [1 ]
Choi, Seon Han [2 ,3 ]
Affiliations
[1] Pukyong Natl Univ, Dept Artificial Intelligent Convergence, Pusan 48513, South Korea
[2] Ewha Womans Univ, Dept Elect & Elect Engn, Seoul 03760, South Korea
[3] Ewha Womans Univ, Grad Program Smart Factory, Seoul 03760, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Few-shot learning; Multi-head self-attention mechanism; Image classification; k-Nearest neighbor;
DOI
10.1016/j.patcog.2022.109170
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-shot learning considers the problem of learning unseen categories given only a few labeled samples. As one of the most popular few-shot learning approaches, Prototypical Networks have received considerable attention owing to their simplicity and efficiency. However, a class prototype is typically obtained by averaging the few labeled samples belonging to the same class, which treats the samples as equally important and is thus prone to learning redundant features. Herein, we propose a self-attention based prototype enhancement network (SAPENet) to obtain a more representative prototype for each class. SAPENet utilizes multi-head self-attention mechanisms to selectively augment discriminative features in each sample feature map, and generates channel attention maps between intra-class sample features to attentively retain informative channel features for that class. The augmented feature maps and attention maps are finally fused to obtain representative class prototypes. Thereafter, a local descriptor-based metric module is employed to fully exploit the channel information of the prototypes by searching for the k most similar local descriptors in the prototype for each local descriptor of the unlabeled samples during classification. We performed experiments on multiple benchmark datasets: miniImageNet, tieredImageNet, and CUB-200-2011. The experimental results on these datasets show that SAPENet achieves a considerable improvement compared to Prototypical Networks and also outperforms related state-of-the-art methods. (c) 2022 Elsevier Ltd. All rights reserved.
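To make the two ingredients in the abstract concrete, the sketch below shows (1) the plain Prototypical-Networks step that SAPENet improves upon, where a class prototype is the unweighted mean of its support-sample feature maps, and (2) a local descriptor-based metric in the spirit the abstract describes, where each spatial position of a feature map is treated as a C-dimensional descriptor and a query is scored by the summed cosine similarity to its k nearest descriptors in the prototype. This is a minimal illustration assuming toy NumPy feature maps; function names, shapes, and the choice of cosine similarity are illustrative assumptions, not the paper's exact implementation (which also applies multi-head self-attention and channel attention before fusion).

```python
import numpy as np

def class_prototypes(support, labels, n_classes):
    # Baseline Prototypical-Networks step: a class prototype is the mean of
    # its support-sample feature maps (shape C x H x W). SAPENet replaces
    # this equal-weight average with attention-enhanced fusion.
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def knn_local_descriptor_score(query, prototype, k=3):
    # Each spatial position of the C x H x W feature map is a C-dim local
    # descriptor. Score = sum over query descriptors of the cosine
    # similarity to their k most similar descriptors in the prototype.
    C = query.shape[0]
    q = query.reshape(C, -1).T              # (HW, C) query descriptors
    p = prototype.reshape(C, -1).T          # (HW, C) prototype descriptors
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    p = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-8)
    sims = q @ p.T                          # pairwise cosine similarities
    topk = np.sort(sims, axis=1)[:, -k:]    # k best matches per descriptor
    return float(topk.sum())

# Toy 2-way episode: 2 support samples per class, feature maps of shape (32, 3, 3)
rng = np.random.default_rng(0)
support = rng.normal(size=(4, 32, 3, 3))
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(support, labels, 2)

# Classify a query by the highest local-descriptor score against each prototype
query = support[0] + 0.1 * rng.normal(size=(32, 3, 3))
scores = [knn_local_descriptor_score(query, protos[c]) for c in range(2)]
pred = int(np.argmax(scores))
```

Scoring with per-descriptor nearest neighbors, rather than one global distance between pooled vectors, is what lets the metric module exploit channel and spatial information in the prototype, as the abstract emphasizes.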
Pages: 11
Related Papers (50 items)
  • [1] Few-shot learning based on prototype rectification with a self-attention mechanism
    Zhao, Peng
    Wang, Liang
    Zhao, Xuyang
    Liu, Huiting
    Ji, Xia
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249
  • [2] SELF-ATTENTION RELATION NETWORK FOR FEW-SHOT LEARNING
    Hui, Binyuan
    Zhu, Pengfei
    Hu, Qinghua
    Wang, Qilong
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2019, : 198 - 203
  • [3] SaberNet: Self-attention based effective relation network for few-shot learning
    Li, Zijun
    Hu, Zhengping
    Luo, Weiwei
    Hu, Xiao
    PATTERN RECOGNITION, 2023, 133
  • [4] Few-Shot Learning Based on Self-Attention and Auto-Encoder
    Ji, Zhong
    Chai, Xingliang
    Tianjin Daxue Xuebao (Ziran Kexue yu Gongcheng Jishu Ban)/Journal of Tianjin University Science and Technology, 2021, 54 (04): : 338 - 345
  • [5] Correction to: Self-attention network for few-shot learning based on nearest-neighbor algorithm
    Guangpeng Wang
    Yongxiong Wang
    Machine Vision and Applications, 2023, 34
  • [6] Self-Attention Message Passing for Contrastive Few-Shot Learning
    Shirekar, Ojas Kishorkumar
    Singh, Anuj
    Jamali-Rad, Hadi
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 5415 - 5425
  • [7] Self-Attention Metric Learning Based on Multiscale Feature Fusion for Few-Shot Fault Diagnosis
    Xie, Jingsong
    Liu, Jie
    Ding, Tianqi
    Wang, Tiantian
    Yu, Tianjian
    IEEE SENSORS JOURNAL, 2023, 23 (17) : 19771 - 19782
  • [8] Self-attention network for few-shot learning based on nearest-neighbor algorithm (vol 34, 28, 2023)
    Wang, Guangpeng
    Wang, Yongxiong
    MACHINE VISION AND APPLICATIONS, 2023, 34 (02)
  • [9] Spatial Attention Network for Few-Shot Learning
    He, Xianhao
    Qiao, Peng
    Dou, Yong
    Niu, Xin
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: DEEP LEARNING, PT II, 2019, 11728 : 567 - 578
  • [10] Attention Relational Network for Few-Shot Learning
    Shuai, Jia
    Chen, JiaMing
    Yang, Meng
    INTELLIGENCE SCIENCE AND BIG DATA ENGINEERING: BIG DATA AND MACHINE LEARNING, PT II, 2019, 11936 : 163 - 174