Memory Matching Networks for One-Shot Image Recognition

Cited by: 205
Authors
Cai, Qi [1 ]
Pan, Yingwei [1 ]
Yao, Ting [2 ]
Yan, Chenggang [3 ]
Mei, Tao [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
[2] Microsoft Res, Beijing, Peoples R China
[3] Hangzhou Dianzi Univ, Hangzhou, Zhejiang, Peoples R China
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
DOI
10.1109/CVPR.2018.00429
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we introduce the new ideas of augmenting Convolutional Neural Networks (CNNs) with Memory and learning to learn the network parameters for the unlabelled images on the fly in one-shot learning. Specifically, we present Memory Matching Networks (MM-Net), a novel deep architecture that explores the training procedure, following the philosophy that training and test conditions must match. Technically, MM-Net writes the features of a set of labelled images (support set) into memory and reads from memory when performing inference to holistically leverage the knowledge in the set. Meanwhile, a Contextual Learner employs the memory slots in a sequential manner to predict the parameters of CNNs for unlabelled images. The whole architecture is trained by once showing only a few examples per class and switching the learning from minibatch to minibatch, which is tailored for one-shot learning when presented with a few examples of new categories at test time. Unlike the conventional one-shot learning approaches, our MM-Net could output one unified model irrespective of the number of shots and categories. Extensive experiments are conducted on two public datasets, i.e., Omniglot and miniImageNet, and superior results are reported when compared to state-of-the-art approaches. More remarkably, our MM-Net improves one-shot accuracy on Omniglot from 98.95% to 99.28% and from 49.21% to 53.37% on miniImageNet.
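To give some intuition for the memory write/read and matching steps described in the abstract, the following is a minimal NumPy sketch of one episode. It is not the authors' MM-Net: the CNN embeddings are assumed to be precomputed, the read step is simplified to cosine-similarity attention with one memory slot per support image, and the Contextual Learner that predicts CNN parameters is omitted; the function name memory_match and all shapes are illustrative assumptions.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def memory_match(support_emb, support_labels, query_emb, n_classes):
    """One simplified episode of memory-based matching (sketch, not MM-Net).

    support_emb:    (n_support, d) precomputed embeddings of the labelled support set
    support_labels: (n_support,)   integer class ids
    query_emb:      (n_query, d)   embeddings of unlabelled query images
    Returns per-query class probabilities of shape (n_query, n_classes).
    """
    # "Write": store each (normalized) support embedding in its own memory slot.
    memory = l2_normalize(support_emb)           # (n_support, d)

    # "Read": attend over memory slots with cosine similarity.
    q = l2_normalize(query_emb)                  # (n_query, d)
    attn = softmax(q @ memory.T, axis=-1)        # (n_query, n_support)

    # Aggregate attention mass per class via one-hot support labels.
    one_hot = np.eye(n_classes)[support_labels]  # (n_support, n_classes)
    return attn @ one_hot                        # (n_query, n_classes)

# Toy 5-way 1-shot episode with random stand-in "embeddings".
rng = np.random.default_rng(0)
support = rng.standard_normal((5, 64))
labels = np.arange(5)
queries = rng.standard_normal((3, 64))
print(memory_match(support, labels, queries, n_classes=5).round(3))

In the paper's setting, the same episodic structure (a few labelled examples per class, then classification of unlabelled queries) is used both at training and at test time, which is what the abstract means by matching training and test conditions.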
Pages: 4080-4088
Page count: 9