FedGAMMA: Federated Learning With Global Sharpness-Aware Minimization

Cited by: 8
Authors
Dai, Rong [1 ]
Yang, Xun [1 ]
Sun, Yan [2 ]
Shen, Li [3 ]
Tian, Xinmei [1 ]
Wang, Meng [4 ]
Zhang, Yongdong [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Peoples R China
[2] Univ Sydney, Sch Comp Sci, Sydney, NSW 2008, Australia
[3] JD Explore Acad, Beijing 100000, Peoples R China
[4] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230009, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Servers; Federated learning; Data models; Minimization; Degradation; Convergence; Client-drift; deep learning; distributed learning; federated learning (FL);
DOI
10.1109/TNNLS.2023.3304453
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Federated learning (FL) is a promising framework for privacy-preserving, distributed training with decentralized clients. However, a large divergence arises between the collected local updates and the expected global update. This divergence, known as client drift, is mainly caused by heterogeneous data distributions across clients, multiple local training steps, and partial client participation. Most existing works tackle this challenge under the empirical risk minimization (ERM) rule, while less attention has been paid to the relationship between the global loss landscape and generalization ability. In this work, we propose FedGAMMA, a novel FL algorithm with Global sharpness-Aware MiniMizAtion that seeks a globally flat loss landscape with high performance. Specifically, in contrast to FedSAM, which seeks only local flatness and still suffers performance degradation under client drift, we adopt a client-variance control technique to better align each client's local updates, alleviating client drift and steering all clients toward global flatness together. Finally, extensive experiments demonstrate that FedGAMMA substantially outperforms several existing FL baselines on various datasets, effectively addressing the client-drift issue while seeking a smoother and flatter global landscape.
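The abstract combines two ingredients: a sharpness-aware minimization (SAM) step, which perturbs the weights toward the locally worst-case direction before taking the gradient, and a SCAFFOLD-style variance-control correction that aligns each client's update with the global direction. The sketch below illustrates that combination on a toy objective; the function name, signature, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_sam_update(w, grad_fn, c_local, c_global, rho=0.05, lr=0.1, steps=5):
    """Illustrative client-side update in the spirit of FedGAMMA:
    a SAM gradient step plus a variance-control correction
    (c_global - c_local) that counters client drift.
    Hypothetical sketch, not the paper's exact algorithm."""
    w = w.copy()
    for _ in range(steps):
        g = grad_fn(w)
        # SAM: evaluate the gradient at the worst-case point
        # inside an L2 ball of radius rho around w
        eps = rho * g / (np.linalg.norm(g) + 1e-12)
        g_sam = grad_fn(w + eps)
        # SCAFFOLD-style correction nudges the local update
        # toward the global descent direction
        w -= lr * (g_sam + c_global - c_local)
    return w

# Toy usage: a client minimizing the quadratic 0.5 * ||w - b||^2,
# whose gradient is simply w - b.
b = np.array([1.0, -2.0])
grad = lambda w: w - b
w0 = np.zeros(2)
w1 = local_sam_update(w0, grad, c_local=np.zeros(2), c_global=np.zeros(2))
```

With zero control variates the correction vanishes and the update reduces to plain local SAM; in a full FL round the server would aggregate the returned weights and refresh the control variates from the clients' accumulated updates.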
Pages: 17479-17492 (14 pages)