Few-Shot Learning Based on Self-Attention and Auto-Encoder

Cited: 0
Authors
Ji, Zhong [1 ]
Chai, Xingliang [1 ]
Affiliation
[1] School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Funding
National Natural Science Foundation of China
Keywords
Vector spaces; Signal encoding; Machine learning; Classification (of information); Learning systems
DOI
Not available
Abstract
Few-shot learning (FSL) aims to improve the generalization ability of a learning model so that new categories can be classified from only a small number of available samples, significantly reducing both annotation and model-training costs. Most existing metric-learning methods focus only on finding an appropriate metric space rather than on improving the discriminative power of the feature vectors, yet extracting as much information as possible is crucial when samples are scarce. Based on the differing representation capacities of feature maps, a channel-based self-attention method is proposed that improves the discrimination between class samples by assigning larger weights to the more informative feature maps. In addition, the concept of a space prototype is introduced to exploit information readily obtained from the samples. Meanwhile, inspired by the auto-encoder, a method that leverages information from all the samples is proposed, in which the class prototypes are modified to improve their accuracy. As a parameter-free augmented feature extractor, the proposed self-attention method alleviates the over-fitting problem that is widespread in FSL, and it is general enough to be combined with many existing FSL methods to improve their performance. When applied to prototypical networks, the proposed methods improve the results on two few-shot learning benchmarks, miniImageNet and CUB, under three classification settings. In particular, when the training set differs substantially from the test set, the proposed method yields an absolute performance improvement of 10.23% and a relative improvement of 17.04%. © 2021, Editorial Board of Journal of Tianjin University (Science and Technology). All rights reserved.
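The abstract does not give the paper's exact formulation, but its two main ingredients can be sketched in a few lines of PyTorch. The snippet below is a minimal, illustrative sketch under assumptions: channel_self_attention implements a generic parameter-free channel re-weighting (global average pooling followed by a softmax over channels), and class_prototypes computes prototypical-network-style per-class mean embeddings. The function names and the specific weighting scheme are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def channel_self_attention(features: torch.Tensor) -> torch.Tensor:
    # features: (N, C, H, W) feature maps from a backbone network.
    # Pool each map to a single response, softmax over channels, and
    # re-weight, so more informative feature maps get larger weights.
    response = features.mean(dim=(2, 3))          # (N, C)
    weights = F.softmax(response, dim=1)          # (N, C)
    return features * weights[:, :, None, None]   # (N, C, H, W)

def class_prototypes(support: torch.Tensor, labels: torch.Tensor,
                     n_classes: int) -> torch.Tensor:
    # support: (S, D) support-set embeddings; labels: (S,) class ids.
    # A prototype is the mean embedding of each class's support samples.
    return torch.stack([support[labels == c].mean(dim=0)
                        for c in range(n_classes)])
```

Under this reading, a query would be classified by its nearest prototype, e.g. torch.cdist(query, protos).argmin(dim=1); the paper's auto-encoder-inspired refinement would then adjust these prototypes using information from all samples.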
Pages: 338-345