Few-Shot Directed Meta-Learning for Image Classification

Cited by: 2
Authors
Ouyang, Jihong
Duan, Ganghai
Liu, Siguang [1]
Affiliations
[1] Jilin Univ, Coll Comp Sci & Technol, Fac Informat Sci, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Few-shot learning; Meta-learning; Target adaptation; Image classification
DOI
10.1142/S021800142251017X
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Learning from only a few samples is a challenging problem, and meta-learning is an effective approach to it. A meta-learning model is trained on a large number of tasks built from other samples so that, when it encounters a target task, it can adapt quickly and perform well with only a few labeled samples. However, general meta-learning yields only a universal model with some generalization ability across all unknown tasks, which limits its effectiveness on any specific target task. In this paper, we propose a Few-Shot Directed Meta-Learning (FSDML) model that specializes to the target task by using its few labeled samples to direct the meta-learning process. FSDML divides the model parameters into shared parameters, which store prior knowledge, and target adaptation parameters, which determine the update direction; the two groups are updated in different stages of training. Experiments on image classification with miniImageNet and Omniglot show that FSDML achieves better performance.
Pages: 18
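
The two-stage parameter split described in the abstract can be illustrated with a minimal Python/PyTorch sketch. This is a hedged illustration, not the paper's implementation: the network FSDMLNet, the specific shared/adapt layer split, and the plain SGD update rules are hypothetical stand-ins for the paper's actual architecture and update equations, chosen only to show shared and target-adaptation parameters being updated in different training stages.

import torch
import torch.nn as nn

class FSDMLNet(nn.Module):
    def __init__(self, in_dim=64, hidden=32, n_classes=5):
        super().__init__()
        # Shared parameters: store prior knowledge across tasks.
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Target-adaptation parameters: direct updates toward the target task.
        self.adapt = nn.Linear(hidden, n_classes)

    def forward(self, x):
        return self.adapt(self.shared(x))

def train_step(model, support_x, support_y, query_x, query_y,
               inner_lr=0.01, outer_lr=0.001):
    loss_fn = nn.CrossEntropyLoss()

    # Stage 1: update only the target-adaptation parameters on the
    # few labeled samples of the target task (support set).
    inner_opt = torch.optim.SGD(model.adapt.parameters(), lr=inner_lr)
    inner_opt.zero_grad()
    loss_fn(model(support_x), support_y).backward()
    inner_opt.step()

    # Stage 2: update only the shared parameters on the query set,
    # keeping the adaptation parameters fixed.
    outer_opt = torch.optim.SGD(model.shared.parameters(), lr=outer_lr)
    outer_opt.zero_grad()
    loss_fn(model(query_x), query_y).backward()
    outer_opt.step()

# Toy usage with random data (5-way, 5-shot support, 75 query samples).
model = FSDMLNet()
sx, sy = torch.randn(25, 64), torch.randint(0, 5, (25,))
qx, qy = torch.randn(75, 64), torch.randint(0, 5, (75,))
train_step(model, sx, sy, qx, qy)

Keeping the two parameter groups in separate optimizers makes the stage boundary explicit: each stage computes gradients for the whole model, but only the optimizer owning that stage's group applies a step.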