Improving Generalization of Meta-learning with Inverted Regularization at Inner-level

Cited by: 3
Authors
Wang, Lianzhe [1 ]
Zhou, Shiji [1 ]
Zhang, Shanghang [2 ]
Chu, Xu [1 ]
Chang, Heng [1 ]
Zhu, Wenwu [1 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Peking Univ, Natl Key Lab Multimedia Informat Proc, Beijing, Peoples R China
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR | 2023
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR52729.2023.00756
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Despite the broad interest in meta-learning, the generalization problem remains one of the significant challenges in this field. Existing works focus on meta-generalization to unseen tasks at the meta-level by regularizing the meta-loss, while ignoring that adapted models may not generalize to the task domains at the adaptation level. In this paper, we propose a new regularization mechanism for meta-learning - Minimax-Meta Regularization, which employs inverted regularization at the inner loop and ordinary regularization at the outer loop during training. In particular, the inner inverted regularization makes the adapted model more difficult to generalize to task domains; thus, optimizing the outer-loop loss forces the meta-model to learn meta-knowledge with better generalization. Theoretically, we prove that inverted regularization improves the meta-testing performance by reducing generalization errors. We conduct extensive experiments on representative scenarios, and the results show that our method consistently improves the performance of meta-learning algorithms.
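The abstract describes the mechanism only at a high level; the sketch below illustrates the sign-flipped regularizer on a toy MAML-style linear-regression setup. It is a first-order approximation with an L2 regularizer, written from the abstract's description alone; the function names, learning rates, and the choice of L2 are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def task_loss(w, X, y):
    """Mean squared error of a linear model on one task."""
    return np.mean((X @ w - y) ** 2)

def task_grad(w, X, y):
    """Gradient of the MSE with respect to w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def minimax_meta_step(w, tasks, inner_lr=0.01, outer_lr=0.01, lam=0.1):
    """One meta-update with inverted L2 regularization at the inner level
    and ordinary L2 regularization at the outer level (first-order sketch).

    `tasks` is a list of ((X_support, y_support), (X_query, y_query)) pairs.
    """
    meta_grad = np.zeros_like(w)
    for (X_sup, y_sup), (X_qry, y_qry) in tasks:
        # Inner loop: SUBTRACT the regularizer's gradient (inverted
        # regularization), so adaptation is pushed away from solutions
        # that generalize easily within the task domain.
        g_inner = task_grad(w, X_sup, y_sup) - lam * 2 * w
        w_adapted = w - inner_lr * g_inner
        # Outer loop: evaluate the adapted model on the query set
        # (first-order approximation: no second-order terms).
        meta_grad += task_grad(w_adapted, X_qry, y_qry)
    meta_grad /= len(tasks)
    # Ordinary L2 regularization at the outer (meta) level.
    meta_grad += lam * 2 * w
    return w - outer_lr * meta_grad
```

Under this reading, the only change relative to plain first-order MAML with outer-loop weight decay is the minus sign on `lam * 2 * w` in the inner step; the outer objective then has to compensate for the deliberately harder adaptation, which is the intuition the abstract gives for improved meta-generalization.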
Pages: 7826-7835 (10 pages)
Related papers
50 in total
  • [41] Human-like systematic generalization through a meta-learning neural network
    Brenden M. Lake
    Marco Baroni
    Nature, 2023, 623 : 115 - 121
  • [42] Towards generalization on real domain for single image dehazing via meta-learning
    Ren, Wenqi
    Sun, Qiyu
    Zhao, Chaoqiang
    Tang, Yang
    CONTROL ENGINEERING PRACTICE, 2023, 133
  • [43] Generalization Bounds for Meta-Learning via PAC-Bayes and Uniform Stability
    Farid, Alec
    Majumdar, Anirudha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [44] Is the Meta-Learning Idea Able to Improve the Generalization of Deep Neural Networks on the Standard Supervised Learning?
    Deng, Xiang
    Zhang, Zhongfei
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 150 - 157
  • [45] Personalized Hashtag Recommendation with User-level Meta-learning
    Tao, Hemeng
    Khan, Latifur
    Thuraisingham, Bhavani
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [46] Improving generalization performance of natural gradient learning using optimized regularization by NIC
    Park, H
    Murata, N
    Amari, S
    NEURAL COMPUTATION, 2004, 16 (02) : 355 - 382
  • [47] Task-level Relations Modelling for Graph Meta-learning
    Zhou, Yuchen
    Cao, Yanan
    Shang, Yanmin
    Zhou, Chuan
    Song, Chuancheng
    Shi, Fengzhao
    Li, Qian
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2022, : 813 - 822
  • [48] LSRML: A latent space regularization based meta-learning framework for MR image segmentation
    Zhang, Bo
    Tan, Yunpeng
    Wang, Hui
    Zhang, Zheng
    Zhou, Xiuzhuang
    Wu, Jingyun
    Mi, Yue
    Huang, Haiwen
    Wang, Wendong
    PATTERN RECOGNITION, 2022, 130
  • [50] Improving the performance of weak supervision searches using transfer and meta-learning
    Beauchesne, Hugues
    Chen, Zong-En
    Chiang, Cheng-Wei
    JOURNAL OF HIGH ENERGY PHYSICS, 2024, 2024 (02)