Efficient Variance Reduction for Meta-Learning

Cited by: 0
Authors
Yang, Hansi [1 ]
Kwok, James T. [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Clear Water Bay, Hong Kong, Peoples R China
Keywords
DOI: none available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Meta-learning aims to learn meta-knowledge from a large number of tasks. However, the stochastic meta-gradient can have large variance due to both data sampling (within each task) and task sampling (from the whole task distribution), leading to slow convergence. In this paper, we propose a novel approach that integrates variance reduction with first-order meta-learning algorithms such as Reptile. It retains the bilevel formulation, which better captures the structure of meta-learning, but does not require storing the vast number of task-specific parameters needed by general bilevel variance-reduction methods. Theoretical results show that it enjoys a fast convergence rate due to variance reduction. Experiments on benchmark few-shot classification data sets demonstrate its effectiveness over state-of-the-art meta-learning algorithms with and without variance reduction.
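The abstract's core idea can be illustrated with a toy sketch: Reptile's meta-gradient (the inner-loop displacement phi - theta) is noisy because each step sees only one sampled task, and an SVRG-style control variate built from a periodic snapshot can reduce that variance. The task distribution, hyperparameters, and update rule below are illustrative assumptions, not the paper's actual algorithm or benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task distribution: 1-D linear regression y = w * x with
# task-specific slope w ~ N(2.0, 0.5).  (Assumed setup for illustration.)
def sample_task():
    w = rng.normal(2.0, 0.5)
    def loss_grad(theta, n=8):
        x = rng.normal(size=n)
        y = w * x
        # gradient of 0.5 * mean((theta*x - y)^2) w.r.t. theta
        return np.mean((theta * x - y) * x)
    return loss_grad

def inner_update(theta, loss_grad, steps=5, lr=0.1):
    """A few SGD steps on one task; returns the adapted parameter phi."""
    phi = theta
    for _ in range(steps):
        phi = phi - lr * loss_grad(phi)
    return phi

def reptile_vr(theta=0.0, meta_lr=0.5, iters=200, snapshot_every=20):
    """Reptile with an SVRG-style control variate on the meta-gradient
    (a sketch of the variance-reduction idea, not the paper's exact rule)."""
    snap_theta, snap_avg = theta, 0.0
    for t in range(iters):
        if t % snapshot_every == 0:
            # Average displacement over a batch of tasks at the snapshot point.
            snap_theta = theta
            tasks = [sample_task() for _ in range(16)]
            snap_avg = np.mean([inner_update(snap_theta, g) - snap_theta
                                for g in tasks])
        task = sample_task()
        # Same task evaluated at the current and snapshot parameters.
        g_cur = inner_update(theta, task) - theta
        g_old = inner_update(snap_theta, task) - snap_theta
        # Variance-reduced meta-gradient: g_cur - g_old + snap_avg.
        theta = theta + meta_lr * (g_cur - g_old + snap_avg)
    return theta

theta = reptile_vr()
print(theta)  # should settle near the mean task slope of 2.0
```

Note that the control-variate term `g_cur - g_old + snap_avg` is unbiased for the expected displacement at `theta`, and because `g_cur` and `g_old` are computed on the same task they are correlated, which shrinks the task-sampling variance relative to plain Reptile.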
Pages: 26