Improving Generalization of Meta-learning with Inverted Regularization at Inner-level

Cited by: 3
Authors:
Wang, Lianzhe [1 ]
Zhou, Shiji [1 ]
Zhang, Shanghang [2 ]
Chu, Xu [1 ]
Chang, Heng [1 ]
Zhu, Wenwu [1 ]
Affiliations:
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Peking Univ, Natl Key Lab Multimedia Informat Proc, Beijing, Peoples R China
Source:
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023
Funding:
National Natural Science Foundation of China
DOI:
10.1109/CVPR52729.2023.00756
Chinese Library Classification:
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Despite the broad interest in meta-learning, generalization remains one of the significant challenges in this field. Existing works focus on meta-generalization to unseen tasks at the meta-level by regularizing the meta-loss, while ignoring that adapted models may not generalize to the task domains at the adaptation level. In this paper, we propose a new regularization mechanism for meta-learning, Minimax-Meta Regularization, which employs inverted regularization at the inner loop and ordinary regularization at the outer loop during training. In particular, the inverted inner-loop regularization makes it harder for the adapted model to generalize to the task domain; thus, optimizing the outer-loop loss forces the meta-model to learn meta-knowledge with better generalization. Theoretically, we prove that inverted regularization improves meta-testing performance by reducing generalization error. We conduct extensive experiments on representative scenarios, and the results show that our method consistently improves the performance of meta-learning algorithms.
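The mechanism described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of one meta-training objective, assuming a MAML-style single-step inner loop and a squared-L2 penalty as the regularizer; the names (minimax_meta_loss, loss_fn) and the hyperparameters lam_in / lam_out are illustrative and not taken from the paper's actual implementation:

```python
import torch
from torch.func import functional_call


def l2(params):
    # Squared L2 norm of a collection of tensors, used as the regularizer R(.).
    return sum((p ** 2).sum() for p in params)


def minimax_meta_loss(meta_model, tasks, inner_lr, lam_in, lam_out, loss_fn):
    """Meta-objective with inverted inner-loop and ordinary outer-loop regularization.

    Inner loop:  L_support(theta)  - lam_in  * R(theta)   (inverted: penalty subtracted)
    Outer loop:  L_query(theta')   + lam_out * R(theta)   (ordinary: penalty added)
    """
    params = dict(meta_model.named_parameters())
    meta_loss = torch.zeros(())
    for support_x, support_y, query_x, query_y in tasks:
        # Inner-loop loss with the regularizer SUBTRACTED, so adaptation is
        # biased toward fitting the support set at the expense of generalization.
        preds = functional_call(meta_model, params, (support_x,))
        inner = loss_fn(preds, support_y) - lam_in * l2(params.values())

        # One differentiable gradient step (MAML-style fast adaptation).
        grads = torch.autograd.grad(inner, tuple(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}

        # Query loss under the adapted parameters drives the meta-update.
        query_preds = functional_call(meta_model, adapted, (query_x,))
        meta_loss = meta_loss + loss_fn(query_preds, query_y)

    # Ordinary (positive) regularization on the meta-parameters at the outer level.
    return meta_loss / len(tasks) + lam_out * l2(params.values())
```

Note the sign flip: the same penalty that is subtracted during inner-loop adaptation is added at the meta-level, so the meta-parameters must encode knowledge that generalizes despite a deliberately overfit inner step.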
Pages: 7826-7835
Number of pages: 10
Related Papers
50 records in total
  • [1] Improving Generalization in Meta-learning via Task Augmentation
    Yao, Huaxiu
    Huang, Long-Kai
    Zhang, Linjun
    Wei, Ying
    Tian, Li
    Zou, James
    Huang, Junzhou
    Li, Zhenhui
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [2] Learning to Generalize: Meta-Learning for Domain Generalization
    Li, Da
    Yang, Yongxin
    Song, Yi-Zhe
    Hospedales, Timothy M.
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 3490 - 3497
  • [3] IMPROVING GENERALIZATION FOR FEW-SHOT REMOTE SENSING CLASSIFICATION WITH META-LEARNING
    Sharma, Surbhi
    Roscher, Ribana
    Riedel, Morris
    Memon, Shahbaz
    Cavallaro, Gabriele
    2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022), 2022, : 5061 - 5064
  • [4] IMPROVING META-LEARNING GENERALIZATION WITH ACTIVATION-BASED EARLY-STOPPING
    Guiroy, Simon
    Pal, Christopher
    Mordido, Goncalo
    Chandar, Sarath
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199, 2022, 199
  • [5] Bi-level Meta-learning for Few-shot Domain Generalization
    Qin, Xiaorong
    Song, Xinhang
    Jiang, Shuqiang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 15900 - 15910
  • [6] Meta-learning the invariant representation for domain generalization
    Jia, Chen
    Zhang, Yue
    MACHINE LEARNING, 2024, 113 (04) : 1661 - 1681
  • [7] Domain generalization through meta-learning: a survey
    Khoee, Arsham Gholamzadeh
    Yu, Yinan
    Feldt, Robert
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57 (10)
  • [8] Meta-Learning for Domain Generalization in Semantic Parsing
    Wang, Bailin
    Lapata, Mirella
    Titov, Ivan
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 366 - 379
  • [9] Improving Generalization in Reinforcement Learning with Mixture Regularization
    Wang, Kaixin
    Kang, Bingyi
    Shao, Jie
    Feng, Jiashi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33