Meta-Learning with Implicit Gradients

Times Cited: 0
Authors
Rajeswaran, Aravind [1 ]
Finn, Chelsea [2 ]
Kakade, Sham M. [1 ]
Levine, Sergey [2 ]
Affiliations
[1] Univ Washington, Seattle, WA 98195 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019) | 2019 / Vol. 32
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient-based (or optimization-based) meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner loop using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner-loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner-level optimization and not on the path taken by the inner-loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner-loop optimizer. As a result, our approach is agnostic to the choice of inner-loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint no greater than that required to compute a single inner-loop gradient, and with no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
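To make the abstract's central idea concrete: with a proximally regularized inner problem phi*(theta) = argmin_phi L_train(phi) + (lambda/2)||phi - theta||^2, the implicit function theorem yields the meta-gradient g = (I + H/lambda)^{-1} grad L_test(phi*), where H is the Hessian of L_train at the inner solution. This can be computed matrix-free with conjugate gradient and Hessian-vector products, which is exactly why the path taken by the inner-loop optimizer never enters the computation. Below is a minimal sketch in JAX; the least-squares train_loss/test_loss, the value of lam, and the conjugate-gradient iteration count are illustrative assumptions for this sketch, not the paper's exact setup.

```python
import jax
import jax.numpy as jnp

lam = 1.0  # regularization strength lambda (assumed value for this sketch)

def train_loss(phi, x, y):
    # Placeholder inner-loop (task training) loss: least squares.
    return jnp.mean((x @ phi - y) ** 2)

def test_loss(phi, x, y):
    # Placeholder outer-loop (task validation) loss.
    return jnp.mean((x @ phi - y) ** 2)

def implicit_meta_grad(phi_star, train_batch, test_batch):
    """Meta-gradient w.r.t. theta; depends only on the inner solution
    phi_star, not on the optimization path that produced it."""
    # Right-hand side: gradient of the test loss at the inner solution.
    v = jax.grad(test_loss)(phi_star, *test_batch)

    def matvec(u):
        # Apply (I + H/lam) to u, where H is the Hessian of train_loss at
        # phi_star, via a Hessian-vector product (H is never materialized).
        hvp = jax.jvp(jax.grad(lambda p: train_loss(p, *train_batch)),
                      (phi_star,), (u,))[1]
        return u + hvp / lam

    # Solve (I + H/lam) g = v matrix-free with conjugate gradient.
    g, _ = jax.scipy.sparse.linalg.cg(matvec, v, maxiter=20)
    return g

# Usage sketch: one meta-gradient for one synthetic task.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 5))
y = jax.random.normal(key, (8,))
phi_star = jnp.zeros(5)  # stand-in for the inner-loop solution
print(implicit_meta_grad(phi_star, (x, y), (x, y)))
```

In the paper's algorithm, this per-task g is averaged over a batch of tasks and used as the gradient for the outer-loop update of theta. Because conjugate gradient needs only Hessian-vector products, the memory footprint matches that of a single inner-loop gradient computation, consistent with the abstract's claim.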
Pages: 12
Related Papers
50 records in total
  • [1] Scalable Bayesian Meta-Learning through Generalized Implicit Gradients
    Zhang, Yilang
    Li, Bingcong
    Gao, Shijian
    Giannakis, Georgios B.
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023 : 11298 - 11306
  • [2] Meta-learning with implicit gradients in a few-shot setting for medical image segmentation
    Khadka, Rabindra
    Jha, Debesh
    Hicks, Steven
    Thambawita, Vajira
    Riegler, Michael A.
    Ali, Sharib
    Halvorsen, Pal
    COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 143
  • [3] Meta-learning Sparse Implicit Neural Representations
    Lee, Jaeho
    Tack, Jihoon
    Lee, Namhoon
    Shin, Jinwoo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [4] Implicit Kernel Meta-Learning Using Kernel Integral Forms
    Falk, John Isak Texas
    Ciliberto, Carlo
    Pontil, Massimiliano
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, VOL 180, 2022, 180 : 652 - 662
  • [5] Stateless neural meta-learning using second-order gradients
    Huisman, Mike
    Plaat, Aske
    van Rijn, Jan N.
    MACHINE LEARNING, 2022, 111 (09) : 3227 - 3244
  • [6] Meta-learning based instance manipulation for implicit discourse relation recognition
    Zeng, Jiali
    Xie, Binbin
    Wu, Changxing
    Yin, Yongjing
    Zeng, Hualin
    Su, Jinsong
    KNOWLEDGE-BASED SYSTEMS, 2023, 267
  • [7] Learning Meta-Learning (LML) dataset: Survey data of meta-learning parameters
    Corraya, Sonia
    Al Mamun, Shamim
    Kaiser, M. Shamim
    DATA IN BRIEF, 2023, 51
  • [8] PAC-Bayes Meta-Learning With Implicit Task-Specific Posteriors
    Nguyen, Cuong
    Do, Thanh-Toan
    Carneiro, Gustavo
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (01) : 841 - 851
  • [9] Meta-learning in Reinforcement Learning
    Schweighofer, N
    Doya, K
    NEURAL NETWORKS, 2003, 16 (01) : 5 - 9