Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning

Cited by: 0
Authors
Wang, Bokun [1 ]
Yuan, Zhuoning [2 ]
Ying, Yiming [3 ]
Yang, Tianbao [1 ]
Affiliations
[1] Department of Computer Science and Engineering, Texas A&M University, College Station, TX 77843, United States
[2] Department of Computer Science, The University of Iowa, Iowa City, IA 52242, United States
[3] Department of Mathematics and Statistics, University at Albany, Albany, NY 12222, United States
Funding
National Science Foundation (United States)
Keywords
Adversarial machine learning; Contrastive learning; Optimization algorithms
DOI
Not available
Abstract
In recent years, model-agnostic meta-learning (MAML) has become a popular research area. However, the stochastic optimization of MAML is still underdeveloped. Existing MAML algorithms rely on the episodic idea of sampling a few tasks and data points to update the meta-model at each iteration. Nonetheless, these algorithms either fail to guarantee convergence with a constant mini-batch size or require processing a large number of tasks at every iteration, which is unsuitable for continual learning or cross-device federated learning, where only a small number of tasks are available per iteration or per round. To address these issues, this paper proposes memory-based stochastic algorithms for MAML that converge with vanishing error. The proposed algorithms require sampling only a constant number of tasks and data samples per iteration, making them suitable for the continual learning scenario. Moreover, we introduce a communication-efficient memory-based MAML algorithm for personalized federated learning in cross-device (with client sampling) and cross-silo (without client sampling) settings. Our theoretical analysis improves the optimization theory for MAML, and our empirical results corroborate our theoretical findings. Interested readers can access our code at https://github.com/bokun-wang/moml. © 2023 Bokun Wang, Zhuoning Yuan, Yiming Ying, Tianbao Yang.
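
To make the memory-based idea in the abstract concrete, below is a minimal, hypothetical NumPy sketch of a memory-based MAML-style update on toy quadratic tasks. The per-task memory array, the moving-average update, the toy losses, and all hyperparameter names are illustrative assumptions for exposition only; this is not the paper's exact MOML algorithm (see the linked repository for that).

# Hypothetical sketch: memory-based MAML-style update on toy quadratic tasks.
# Not the paper's exact MOML algorithm; all names and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
num_tasks, dim = 20, 5
inner_lr, outer_lr, ema = 0.1, 0.05, 0.5      # ema: memory moving-average weight (assumed)

# Toy tasks: each task i has loss L_i(w) = 0.5 * ||w - c_i||^2, so grad L_i(w) = w - c_i.
centers = rng.normal(size=(num_tasks, dim))

w = np.zeros(dim)                     # meta-model
memory = np.zeros((num_tasks, dim))   # per-task memory of inner-adapted parameters

for it in range(500):
    # Sample only a constant, small number of tasks per iteration (here: 2).
    tasks = rng.choice(num_tasks, size=2, replace=False)
    for i in tasks:
        grad_inner = w - centers[i]              # gradient of L_i at the meta-model
        adapted = w - inner_lr * grad_inner      # one-step inner adaptation
        # Memory update: moving average toward the freshly adapted parameters.
        memory[i] = (1 - ema) * memory[i] + ema * adapted
    # Outer (meta) update uses only the sampled tasks' memorized estimates.
    meta_grad = np.mean([(1 - inner_lr) * (memory[i] - centers[i]) for i in tasks], axis=0)
    w -= outer_lr * meta_grad

print("meta-model after training:", np.round(w, 3))

On these toy quadratics the meta-model drifts toward the mean of the task centers while touching only two tasks per iteration, illustrating the constant per-iteration task budget emphasized in the abstract.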