Meta-Learning with Implicit Gradients

Cited by: 0
Authors
Rajeswaran, Aravind [1 ]
Finn, Chelsea [2 ]
Kakade, Sham M. [1 ]
Levine, Sergey [2 ]
Affiliations
[1] Univ Washington, Seattle, WA 98195 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019) | 2019 / Vol. 32
Keywords
None listed
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner loop using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner-loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner-level optimization and not on the path taken by the inner-loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner-loop optimizer; as a result, our approach is agnostic to that choice and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint no greater than that required to compute a single inner-loop gradient, and with no increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
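To make the meta-gradient computation concrete: implicit MAML solves the proximally regularized inner problem phi* = argmin_phi L_hat(phi) + (lambda/2)||phi - theta||^2, then recovers the meta-gradient as (I + (1/lambda) H)^{-1} grad L(phi*), where H is the Hessian of the inner loss at phi*. This linear system can be solved approximately by conjugate gradient using only Hessian-vector products, so H is never materialized. The sketch below illustrates this structure in JAX for a single task; the linear-model losses, the plain gradient-descent inner solver, and all hyperparameters (LAM, step counts, CG iterations) are illustrative assumptions, not the authors' reference implementation.

import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import cg

LAM = 1.0  # regularization strength lambda of the proximal inner objective (assumed value)

def inner_loss(phi, X, y):
    # Placeholder training loss L_hat for a linear model; stands in for any task loss.
    return jnp.mean((X @ phi - y) ** 2)

def outer_loss(phi, X, y):
    # Placeholder test/query loss L evaluated at the adapted parameters.
    return jnp.mean((X @ phi - y) ** 2)

def adapt(theta, X, y, steps=100, lr=0.1):
    # Approximately solve the regularized inner problem
    #   phi* = argmin_phi inner_loss(phi) + (LAM/2) * ||phi - theta||^2
    # with plain gradient descent; iMAML needs only the solution, not the optimization path.
    def reg_loss(phi):
        return inner_loss(phi, X, y) + 0.5 * LAM * jnp.sum((phi - theta) ** 2)
    grad_fn = jax.grad(reg_loss)
    phi = theta
    for _ in range(steps):
        phi = phi - lr * grad_fn(phi)
    return phi

def meta_gradient(theta, X_tr, y_tr, X_te, y_te):
    phi_star = adapt(theta, X_tr, y_tr)
    v = jax.grad(outer_loss)(phi_star, X_te, y_te)
    # Matrix-free operator u -> (I + H / LAM) u, where H is the Hessian of the
    # unregularized inner loss at phi*; only Hessian-vector products are computed.
    def A(u):
        hvp = jax.grad(lambda p: jnp.vdot(jax.grad(inner_loss)(p, X_tr, y_tr), u))(phi_star)
        return u + hvp / LAM
    # Conjugate gradient approximately solves (I + H/LAM) g = v for the meta-gradient g.
    g, _ = cg(A, v, maxiter=20)
    return g

In a full training loop, this per-task meta-gradient would be averaged over a batch of tasks and applied to theta by any outer optimizer. Because the inner solver's trajectory is never differentiated through, memory stays at the cost of a single inner-loop gradient, which is the source of the savings described in the abstract.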
Pages: 12