Semantics-driven attentive few-shot learning over clean and noisy samples

Cited by: 3
Authors
Baran, Orhun Bugra [1 ]
Cinbis, Ramazan Gokberk [1 ]
Affiliations
[1] Middle East Tech Univ, Dept Comp Engn, TR-06800 Ankara, Turkey
Keywords
Few-shot learning; Vision and language integration
DOI
10.1016/j.neucom.2022.09.121
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Over the last couple of years, few-shot learning (FSL) has attracted significant attention as a way to minimize the dependency on labeled training examples. An inherent difficulty in FSL is handling the ambiguities that result from having too few training samples per class. To tackle this fundamental challenge, we aim to train meta-learner models that can leverage prior semantic knowledge about novel classes to guide the classifier synthesis process. In particular, we propose semantically-conditioned feature attention and sample attention mechanisms that estimate the importance of representation dimensions and training instances. We also study the problem of sample noise in FSL, toward using meta-learners in more realistic and imperfect settings. Our experimental results demonstrate the effectiveness of the proposed semantic FSL model with and without sample noise. (c) 2022 Elsevier B.V. All rights reserved.
Pages: 59-69
Number of pages: 11
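
As a rough illustration of the mechanisms named in the abstract, the sketch below implements a semantically-conditioned feature-attention and sample-attention module in PyTorch: a class-semantic embedding is mapped to per-dimension importance weights over the visual features, and each support sample receives an attention score before prototype-style aggregation. This is not the authors' architecture; the module and layer names (SemanticAttention, feat_attn, sample_attn), the layer sizes, and the prototype aggregation are assumptions chosen for the example.

```python
# Minimal sketch (assumed design, not the paper's exact model) of
# semantically-conditioned feature and sample attention for few-shot learning.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticAttention(nn.Module):
    def __init__(self, sem_dim: int, feat_dim: int, hidden: int = 128):
        super().__init__()
        # Feature attention: semantic embedding -> weights over feature dimensions.
        self.feat_attn = nn.Sequential(
            nn.Linear(sem_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim), nn.Sigmoid(),
        )
        # Sample attention: (semantic embedding, sample feature) -> scalar score.
        self.sample_attn = nn.Sequential(
            nn.Linear(sem_dim + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, support_feats: torch.Tensor, sem_emb: torch.Tensor):
        # support_feats: (n_support, feat_dim) features of the support samples
        # sem_emb:       (sem_dim,) semantic embedding of the novel class
        n = support_feats.size(0)
        dim_w = self.feat_attn(sem_emb)                  # (feat_dim,) importance per dimension
        weighted = support_feats * dim_w                 # re-weight representation dimensions
        sem_rep = sem_emb.unsqueeze(0).expand(n, -1)     # (n_support, sem_dim)
        scores = self.sample_attn(torch.cat([sem_rep, weighted], dim=-1))
        alpha = F.softmax(scores.squeeze(-1), dim=0)     # down-weights noisy support samples
        prototype = (alpha.unsqueeze(-1) * weighted).sum(dim=0)  # (feat_dim,) class prototype
        return prototype, dim_w, alpha


if __name__ == "__main__":
    # Toy usage: 5 support samples with 512-d visual features, 300-d class semantics.
    module = SemanticAttention(sem_dim=300, feat_dim=512)
    proto, dim_w, alpha = module(torch.randn(5, 512), torch.randn(300))
    print(proto.shape, alpha)
```

In this reading, the sigmoid feature weights play the role of the semantically-conditioned feature attention and the softmax over support samples plays the role of the sample attention that can suppress mislabeled or noisy examples; how the paper actually parameterizes and trains these components is specified in the full text, not here.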