Meta-Learning with a Geometry-Adaptive Preconditioner

Cited by: 14
Authors
Kang, Suhyun [1 ]
Hwang, Duhun [1 ]
Eo, Moonjung [1 ]
Kim, Taesup [2 ]
Rhee, Wonjong [1 ,3 ,4 ]
Affiliations
[1] Seoul Natl Univ, Dept Intelligence & Informat, Seoul, South Korea
[2] Seoul Natl Univ, Grad Sch Data Sci, Seoul, South Korea
[3] Seoul Natl Univ, IPAI, Seoul, South Korea
[4] Seoul Natl Univ, AIIS, Seoul, South Korea
Keywords
DOI
10.1109/CVPR52729.2023.01543
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Model-agnostic meta-learning (MAML) is one of the most successful meta-learning algorithms. It has a bi-level optimization structure in which the outer-loop process learns a shared initialization and the inner-loop process optimizes task-specific weights. Although MAML relies on standard gradient descent in the inner loop, recent studies have shown that controlling the inner loop's gradient descent with a meta-learned preconditioner can be beneficial. Existing preconditioners, however, cannot adapt in a task-specific and path-dependent way at the same time. Additionally, they do not satisfy the Riemannian metric condition, which would enable steepest-descent learning with the preconditioned gradient. In this study, we propose Geometry-Adaptive Preconditioned gradient descent (GAP), which overcomes these limitations of MAML: GAP can efficiently meta-learn a preconditioner that depends on the task-specific parameters, and its preconditioner can be shown to be a Riemannian metric. Thanks to these two properties, the geometry-adaptive preconditioner is effective for improving the inner-loop optimization. Experimental results show that GAP outperforms the state-of-the-art MAML family and the preconditioned gradient descent-MAML (PGD-MAML) family on a variety of few-shot learning tasks. Code is available at: https://github.com/Suhyun777/CVPR23-GAP.
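To make the abstract's inner-loop idea concrete, below is a minimal, hypothetical Python/PyTorch sketch of a preconditioned inner-loop update in a MAML-style setting. It is not the authors' implementation (see the linked repository for that): the names inner_loop_adapt and precondition, the loss_fn signature, and the simple diagonal softplus preconditioner are illustrative assumptions, whereas GAP's actual preconditioner is dependent on the task-specific parameters and is constructed to be a Riemannian metric.

import torch
import torch.nn.functional as F

def precondition(P, grad):
    # Illustrative diagonal preconditioner: softplus keeps the entries positive,
    # so the implied diagonal preconditioning matrix stays positive-definite.
    # (Assumes P has the same shape as grad.)
    return F.softplus(P) * grad

def inner_loop_adapt(params, preconditioners, loss_fn, support_x, support_y,
                     inner_lr=0.01, num_steps=5):
    # Task-specific adaptation: theta <- theta - lr * P * grad,
    # i.e. the usual MAML inner loop but with a preconditioned gradient.
    adapted = [p.clone() for p in params]
    for _ in range(num_steps):
        loss = loss_fn(adapted, support_x, support_y)  # loss on the support set
        grads = torch.autograd.grad(loss, adapted, create_graph=True)
        adapted = [p - inner_lr * precondition(P, g)
                   for p, P, g in zip(adapted, preconditioners, grads)]
    return adapted

In a full method, the outer loop would evaluate the adapted weights on query data and backpropagate through these inner steps to update both the shared initialization and the meta-learned preconditioner parameters.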
Pages: 16080 - 16090
Number of pages: 11
Related Papers
50 records in total
  • [1] Geometry-adaptive Meta-learning in Riemannian Manifolds
    Gao, Zhi
    PROCEEDINGS OF THE ACM TURING AWARD CELEBRATION CONFERENCE-CHINA 2024, ACM-TURC 2024, 2024, : 231 - 232
  • [2] Geometry-Adaptive Meta-Learning in Mixed-Curvature Spaces
    Gao, Zhi
    Wu, Yu-Wei
    Jia, Yun-De
Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (10): 2289 - 2306
  • [3] Meta-Learning with Adaptive Hyperparameters
    Baik, Sungyong
    Choi, Myungsub
    Choi, Janghoon
    Kim, Heewon
    Lee, Kyoung Mu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [4] Adaptive Code Completion with Meta-learning
    Fang, Liyu
    Huang, Zhiqiu
    Zhou, Yu
    Chen, Taolue
    THE 12TH ASIA-PACIFIC SYMPOSIUM ON INTERNETWARE, INTERNETWARE 2020, 2021, : 116 - 125
  • [5] Meta-learning for Adaptive Image Segmentation
    Sellaouti, Aymen
    Jaafra, Yasmina
    Hamouda, Atef
    IMAGE ANALYSIS AND RECOGNITION, ICIAR 2014, PT I, 2014, 8814 : 187 - 197
  • [6] Meta-learning with an Adaptive Task Scheduler
    Yao, Huaxiu
    Wang, Yu
    Wei, Ying
    Zhao, Peilin
    Mahdavi, Mehrdad
    Lian, Defu
    Finn, Chelsea
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [7] Meta-learning for multiple detector geometry modeling
    Salamani, Dalila
    Zaborowska, Anna
    Pokorski, Witold
    20TH INTERNATIONAL WORKSHOP ON ADVANCED COMPUTING AND ANALYSIS TECHNIQUES IN PHYSICS RESEARCH, 2023, 2438
  • [8] Meta-AF: Meta-Learning for Adaptive Filters
    Casebeer, Jonah
    Bryan, Nicholas J.
    Smaragdis, Paris
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 355 - 370
  • [9] Geometry-adaptive block partitioning for video coding
    Escoda, Oscar Divorra
    Yin, Peng
    Dai, Congxia
    Li, Xin
2007 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL I, PTS 1-3, PROCEEDINGS, 2007, : 657+
  • [10] Visual Tracking by Adaptive Continual Meta-Learning
    Choi, Janghoon
    Baik, Sungyong
    Choi, Myungsub
    Kwon, Junseok
    Lee, Kyoung Mu
    IEEE ACCESS, 2022, 10 : 9022 - 9035