Stochastic Smoothed Gradient Descent Ascent for Federated Minimax Optimization

Cited by: 0
Authors
Shen, Wei [1 ]
Huang, Minhui [2 ]
Zhang, Jiawei [3 ]
Shen, Cong [1 ]
Affiliations
[1] Univ Virginia, Charlottesville, VA 22904 USA
[2] Meta AI, New York, NY USA
[3] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Funding
National Science Foundation (US);
Keywords
DOI
N/A
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In recent years, federated minimax optimization has attracted growing interest due to its extensive applications in various machine learning tasks. While Smoothed Alternating Gradient Descent Ascent (Smoothed-AGDA) has proved successful in centralized nonconvex minimax optimization, whether and how smoothing techniques can help in a federated setting remains unexplored. In this paper, we propose a new algorithm, termed Federated Stochastic Smoothed Gradient Descent Ascent (FESS-GDA), which applies the smoothing technique to federated minimax optimization. We prove that FESS-GDA can be uniformly applied to solve several classes of federated minimax problems, and we establish new or improved convergence results for these settings. We showcase the practical efficiency of FESS-GDA on federated learning tasks such as training generative adversarial networks (GANs) and fair classification.
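For intuition about the smoothing technique the abstract builds on, below is a minimal sketch of smoothed alternating GDA on a toy quadratic minimax problem. The surrogate K(x, y; z) = f(x, y) + (p/2)||x - z||^2, the toy objective, and all step sizes are illustrative assumptions for this example; it does not reproduce the paper's FESS-GDA, which additionally involves local client updates and server aggregation.

import numpy as np

# Toy objective: f(x, y) = 0.5*||x||^2 + x.y - 0.5*||y||^2 (saddle point at the origin).
def grad_x(x, y):
    return x + y

def grad_y(x, y):
    return x - y

def smoothed_agda(x0, y0, p=1.0, c=0.05, alpha=0.05, beta=0.1, iters=2000):
    """Alternating GDA on the smoothed surrogate
    K(x, y; z) = f(x, y) + (p/2) * ||x - z||^2,
    with an exponentially averaged auxiliary anchor z."""
    x, y, z = x0.copy(), y0.copy(), x0.copy()
    for _ in range(iters):
        # Descent on x uses the smoothed gradient: grad_x f plus the p*(x - z) proximal term.
        x = x - c * (grad_x(x, y) + p * (x - z))
        # Alternating ascent on y uses the freshly updated x.
        y = y + alpha * grad_y(x, y)
        # Drift the anchor slowly toward the current iterate.
        z = z + beta * (x - z)
    return x, y

x_out, y_out = smoothed_agda(np.ones(5), -np.ones(5))
print(x_out, y_out)  # both approach the saddle point at 0

The anchor z is what the smoothing buys: the proximal term p/2*||x - z||^2 makes each x-update better conditioned, while z drifts slowly enough that the surrogate tracks the original problem.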
Pages: 44
Related Papers (50 total)
  • [1] AdaGDA: Faster Adaptive Gradient Descent Ascent Methods for Minimax Optimization
    Huang, Feihu
    Wu, Xidong
    Hu, Zhengmian
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [2] Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems
    Luo, Luo
    Ye, Haishan
    Huang, Zhichao
    Zhang, Tong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [3] Universal Gradient Descent Ascent Method for Nonconvex-Nonconcave Minimax Optimization
    Zheng, Taoli
    Zhu, Linglingzhi
    So, Anthony Man-Cho
    Blanchet, José
    Li, Jiajin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023, 36 : 54075 - 54110
  • [4] Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax Optimization
    Lin, Tianyi
    Jin, Chi
    Jordan, Michael I.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2025, 26
  • [5] Gradient Descent Ascent for Minimax Problems on Riemannian Manifolds
    Huang, Feihu
    Gao, Shangqian
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (07) : 8466 - 8476
  • [6] Randomized Stochastic Gradient Descent Ascent
    Sebbouh, Othmane
    Cuturi, Marco
    Peyré, Gabriel
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [7] Near-optimal Local Convergence of Alternating Gradient Descent-Ascent for Minimax Optimization
    Zhang, Guodong
    Wang, Yuanhao
    Lessard, Laurent
    Grosse, Roger
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151