Few-Shot Learning Based on Self-Attention and Auto-Encoder

Cited by: 0
Authors
Ji, Zhong [1]
Chai, Xingliang [1]
Affiliation
[1] School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Funding
National Natural Science Foundation of China
Keywords
Vector spaces; Signal encoding; Machine learning; Classification (of information); Learning systems
DOI
Not available
Abstract
The aim of few-shot learning (FSL) is to improve the generalization ability of a learning model so that new categories can be classified from only a small number of available samples, which significantly reduces both annotation and model-training cost. Most existing metric-learning methods focus on finding an appropriate metric space rather than on improving the discriminability of the feature vectors, yet extracting maximal information from the data is crucial when samples are scarce. Based on the differing representation capacities of feature maps, a channel-based self-attention method is proposed that improves the discriminability of samples from different classes by assigning larger weights to the more informative feature maps. In addition, the concept of a space prototype is introduced to exploit information readily obtained from the samples. Meanwhile, inspired by the auto-encoder, a method that leverages information from all samples is proposed, in which the class prototypes are rectified to improve their accuracy. As a parameter-free augmented feature extractor, the proposed self-attention method alleviates the over-fitting problem that is widespread in FSL, and it is general and compatible with many existing FSL methods, improving their performance. Compared with prototypical networks, the proposed methods improve results on two few-shot learning benchmarks, miniImagenet and CUB, under three classification settings. In particular, when the training set differs substantially from the test set, the proposed method yields an absolute performance improvement of 10.23% and a relative improvement of 17.04%. © 2021, Editorial Board of Journal of Tianjin University (Science and Technology). All rights reserved.
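The abstract describes two building blocks: a parameter-free channel attention that up-weights the more informative feature channels, and nearest-prototype classification in the style of prototypical networks. The paper's exact weighting rule is not given on this page, so the sketch below is purely illustrative: it assumes a hypothetical variance-based softmax weighting over channels, and all function names are inventions for this example, not the authors' implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def channel_weights(support):
    # Hypothetical parameter-free channel attention: weight each channel by
    # the softmax of its variance across all support features, so channels
    # that vary more (and thus discriminate more) receive larger weights.
    feats = [f for fs in support.values() for f in fs]
    d = len(feats[0])
    means = [sum(f[c] for f in feats) / len(feats) for c in range(d)]
    var = [sum((f[c] - means[c]) ** 2 for f in feats) / len(feats) for c in range(d)]
    return softmax(var)

def prototypes(support, w):
    # Class prototype = mean of channel-weighted support features,
    # as in prototypical networks but in the re-weighted feature space.
    return {
        label: [w[c] * sum(f[c] for f in fs) / len(fs) for c in range(len(w))]
        for label, fs in support.items()
    }

def classify(query, protos, w):
    # Assign the query to the nearest prototype under squared Euclidean
    # distance in the weighted space.
    q = [w[c] * query[c] for c in range(len(w))]
    return min(protos, key=lambda k: sum((q[c] - protos[k][c]) ** 2 for c in range(len(q))))
```

For example, with a 2-way support set `{"a": [[1.0, 0.0], [1.2, 0.1]], "b": [[0.0, 1.0], [0.1, 1.1]]}`, a query near class a's features is assigned label `"a"`. The rectification of prototypes via an auto-encoder mentioned in the abstract is not modeled here.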
Pages: 338-345
Related Papers (50 total)
  • [1] Self-Attention Relation Network for Few-Shot Learning
    Hui, Binyuan; Zhu, Pengfei; Hu, Qinghua; Wang, Qilong
    2019 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2019: 198-203
  • [2] Few-Shot Learning Based on Prototype Rectification with a Self-Attention Mechanism
    Zhao, Peng; Wang, Liang; Zhao, Xuyang; Liu, Huiting; Ji, Xia
    Expert Systems with Applications, 2024, 249
  • [3] Self-Attention Message Passing for Contrastive Few-Shot Learning
    Shirekar, Ojas Kishorkumar; Singh, Anuj; Jamali-Rad, Hadi
    2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023: 5415-5425
  • [4] Few-Shot Representation Learning for Knowledge Graph with Variational Auto-encoder Data Augmentation
    Wang, Ling; Lu, Jicang; Lu, Yinpeng; Liu, Yan
    Advanced Intelligent Computing Technology and Applications, Pt XII, ICIC 2024, 2024, 14873: 359-375
  • [5] SaberNet: Self-Attention Based Effective Relation Network for Few-Shot Learning
    Li, Zijun; Hu, Zhengping; Luo, Weiwei; Hu, Xiao
    Pattern Recognition, 2023, 133
  • [6] SAPENet: Self-Attention Based Prototype Enhancement Network for Few-Shot Learning
    Huang, Xilang; Choi, Seon Han
    Pattern Recognition, 2023, 135
  • [7] CTVAE: Current Task Variational Auto-Encoder with Dynamic Ensemble Loss for Few-Shot Learning
    Su, Hao-Chin; Chen, Yi-Ling
    2023 International Joint Conference on Neural Networks (IJCNN), 2023
  • [8] Few-Shot Specific Emitter Identification Using Asymmetric Masked Auto-Encoder
    Yao, Zhisheng; Fu, Xue; Guo, Lantu; Wang, Yu; Lin, Yun; Shi, Shengnan; Gui, Guan
    IEEE Communications Letters, 2023, 27 (10): 2657-2661
  • [9] Self-Attention Metric Learning Based on Multiscale Feature Fusion for Few-Shot Fault Diagnosis
    Xie, Jingsong; Liu, Jie; Ding, Tianqi; Wang, Tiantian; Yu, Tianjian
    IEEE Sensors Journal, 2023, 23 (17): 19771-19782
  • [10] Correction to: Self-Attention Network for Few-Shot Learning Based on Nearest-Neighbor Algorithm
    Wang, Guangpeng; Wang, Yongxiong
    Machine Vision and Applications, 2023, 34