Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

Cited by: 0
Authors
Finn, Chelsea [1 ]
Abbeel, Pieter [1 ,2 ]
Levine, Sergey [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Open AI, San Francisco, CA USA
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70 | 2017 / Vol. 70
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
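The training procedure the abstract describes — take a few gradient steps on a new task's small training set, then update the shared initialisation so that those steps generalize well — can be illustrated with a minimal first-order sketch (ignoring the second-order terms of the full algorithm) on a toy 1-D regression family. The task family, function names, and hyperparameters below are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    """Squared-error loss and its gradient for the scalar model y_hat = w * x."""
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(x * err)

def sample_slope():
    # Hypothetical task family: each task is regression onto y = slope * x.
    return rng.uniform(0.5, 2.0)

def batch(slope, n=10):
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, slope * x

alpha, beta = 0.1, 0.01  # inner / outer step sizes (illustrative values)
w = 0.0                  # the meta-learned initialisation being trained

for _ in range(2000):                        # outer (meta) loop over tasks
    slope = sample_slope()
    x_tr, y_tr = batch(slope)                # small training set for this task
    x_te, y_te = batch(slope)                # held-out set from the same task
    _, g = loss_and_grad(w, x_tr, y_tr)
    w_adapted = w - alpha * g                # one inner gradient step (adaptation)
    _, g_meta = loss_and_grad(w_adapted, x_te, y_te)
    w -= beta * g_meta                       # first-order meta-update of the init
```

After meta-training, `w` is an initialisation from which a single inner gradient step on a new task's data already reduces that task's loss; the full method differentiates through the inner step rather than using this first-order approximation.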
Pages: 10