Few-shot classification with Fork Attention Adapter

Cited: 0
Authors
Sun, Jieqi [1 ]
Li, Jian [1 ]
Affiliations
[1] Shaanxi Univ Sci & Technol, Sch Elect & Control Engn, Sch Math & Data Sci, Xian 710021, Peoples R China
Keywords
Few-shot Classification; Meta-Learning; Attention mechanism; Dense feature similarity;
DOI
10.1016/j.patcog.2024.110805
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-shot learning aims to transfer knowledge learned from seen categories to unseen categories given only a few references, and bridging this gap between humans and deep learning models remains an essential challenge in real-world applications. Despite extensive prior efforts to tackle the problem by finding an appropriate similarity function, we emphasize that most existing methods consider only a single low-resolution representation pair when computing the similarity between support and query samples. Such representational limitations can destabilize category predictions. To achieve more stable metric learning, we present a novel method dubbed Fork Attention Adapter (FA-adapter), which seamlessly establishes dense feature similarity with the newly generated nuanced features. A two-stage training phase makes the proposed method both more performant and more efficient. Extensive experiments demonstrate consistent and substantial accuracy gains on the fine-grained CUB and Aircraft benchmarks as well as the non-fine-grained mini-ImageNet and tiered-ImageNet benchmarks. By comprehensively studying and visualizing the knowledge learned from different source domains, we further present an extended version, termed FA-adapter++, to boost performance in fine-grained scenarios.
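To make the "dense feature similarity" idea in the abstract concrete, the sketch below (not the authors' code; the FA-adapter's exact aggregation scheme may differ) compares every spatial location of a query feature map with every location of a support feature map, instead of pooling each image into a single low-resolution vector first:

```python
# Minimal sketch of dense feature similarity between support and query
# feature maps, assuming (C, H, W) outputs from some backbone network.
import numpy as np

def dense_similarity(query_feat, support_feat):
    """query_feat, support_feat: (C, H, W) feature maps.
    Returns a scalar score: the mean over query locations of the best
    cosine similarity to any support location (one common aggregation
    choice; used here purely for illustration)."""
    C = query_feat.shape[0]
    q = query_feat.reshape(C, -1).T            # (Hq*Wq, C) query locations
    s = support_feat.reshape(C, -1).T          # (Hs*Ws, C) support locations
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + 1e-8)
    sim = q @ s.T                              # dense (Hq*Wq, Hs*Ws) similarity matrix
    return sim.max(axis=1).mean()              # aggregate to one class score

# Toy check: a feature map compared with itself scores ~1.0
rng = np.random.default_rng(0)
f = rng.standard_normal((64, 5, 5))
print(round(float(dense_similarity(f, f)), 4))  # prints 1.0
```

Because every location pair contributes, a distinctive part (e.g. a bird's beak in CUB) can match its counterpart even when global pooled vectors would wash it out, which is the instability the abstract attributes to single low-resolution representation pairs.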
Pages: 10
Related Papers
50 records in total
  • [31] Semantic-Aligned Attention With Refining Feature Embedding for Few-Shot Image Classification
    Xu, Xianda
    Xu, Xing
    Shen, Fumin
    Li, Yujie
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (12) : 25458 - 25468
  • [32] Few-shot relation classification by context attention-based prototypical networks with BERT
    Hui, Bei
    Liu, Liang
    Chen, Jia
    Zhou, Xue
    Nian, Yuhui
    EURASIP Journal on Wireless Communications and Networking, 2020
  • [33] Few-Shot Malware Classification via Attention-Based Transductive Learning Network
    Deng, Liting
    Yu, Chengli
    Wen, Hui
    Xin, Mingfeng
    Sun, Yue
    Sun, Limin
    Zhu, Hongsong
    MOBILE NETWORKS & APPLICATIONS, 2024, : 1690 - 1704
  • [34] Few-Shot Classification Model Compression via School Learning
    Yang, Sai
    Liu, Fan
    Chen, Delong
    Huang, Huaxi
    Zhou, Jun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (12) : 12244 - 12257
  • [36] A fusion spatial attention approach for few-shot learning
    Song, Heda
    Deng, Bowen
    Pound, Michael
    Ozcan, Ender
    Triguero, Isaac
    INFORMATION FUSION, 2022, 81 : 187 - 202
  • [37] Attention Based Siamese Networks for Few-Shot Learning
    Wang, Junhua
    Zhu, Zijiang
    Li, Jianjun
    Li, Junshan
    PROCEEDINGS OF 2018 IEEE 9TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING AND SERVICE SCIENCE (ICSESS), 2018, : 551 - 554
  • [38] DUAL-ATTENTION NETWORK FOR FEW-SHOT SEGMENTATION
    Chen, Zhikui
    Wang, Han
    Zhang, Suhua
    Zhong, Fangming
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2210 - 2214
  • [39] Global and Local Attention Embedding Network for Few-Shot Fine-Grained Image Classification
    Hu, Jiayuan
    Own, Chung-Ming
    Tao, Wenyuan
    WEB AND BIG DATA, PT I, APWEB-WAIM 2020, 2020, 12317 : 740 - 747
  • [40] Task-wise attention guided part complementary learning for few-shot image classification
    Cheng, Gong
    Li, Ruimin
    Lang, Chunbo
    Han, Junwei
    SCIENCE CHINA INFORMATION SCIENCES, 2021, 64