SELF-ATTENTIVE NETWORKS FOR ONE-SHOT IMAGE RECOGNITION

Times Cited: 0
Authors
Fang, Pin [1 ]
Wang, Yisen [2 ]
Luo, Yuan [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
[2] JD AI Res, Shanghai, Peoples R China
Source
2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME) | 2019
Keywords
One-shot Learning; Image Recognition; Self-attention;
DOI
10.1109/ICME.2019.00165
Chinese Library Classification
TP31 [Computer Software];
Discipline Classification Code
081202 ; 0835 ;
Abstract
Despite recent breakthroughs in applications of deep neural networks, learning from a limited number of examples remains a key challenge in academia and industry. A prototypical example is one-shot learning, in which we must make predictions given only one sample of each unseen class. Most previous works either categorize instances by pairwise similarity or use Long Short-Term Memory (LSTM) based methods; however, they usually suffer from inter/intra-class variation problems or high computation cost. In this paper, we propose self-attentive networks to address the above issues in one-shot image recognition, employing deep neural features together with an attention mechanism, specifically self-attention. Experiments on the Omniglot and Mini-ImageNet datasets show that our proposed self-attentive networks achieve state-of-the-art accuracy on the one-shot image recognition problem. In addition, owing to their low time complexity and parallelizable architecture, self-attentive networks require significantly less time to train and infer.
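The abstract's core component is self-attention over a set of feature embeddings. Below is a minimal sketch of standard scaled dot-product self-attention in NumPy; the function names, dimensions, and random projections are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of feature vectors.

    X: (n, d_in) input embeddings; Wq, Wk, Wv: (d_in, d_k) projections.
    Returns (n, d_k) attended features.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n, n) pairwise similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))      # e.g. 5 support/query embeddings of dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Because the (n, n) attention matrix is computed with dense matrix products rather than a sequential recurrence, the whole operation parallelizes across examples, which is the property the abstract credits for the reduced training and inference time relative to LSTM-based approaches.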
Pages: 934-939
Page count: 6