Client Selection With Staleness Compensation in Asynchronous Federated Learning

Cited by: 10
Authors
Zhu, Hongbin [1 ]
Kuang, Junqian [2 ]
Yang, Miao [3 ]
Qian, Hua [2 ]
Affiliations
[1] Fudan Univ, Inst FinTech, Shanghai 200082, Peoples R China
[2] Chinese Acad Sci, Shanghai Adv Res Inst, Shanghai 201210, Peoples R China
[3] ShanghaiTech Univ, Sch Informat Sci & Technol SIST, Shanghai 201210, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Asynchronous federated learning; staleness compensation; client selection; multi-armed bandit;
DOI
10.1109/TVT.2022.3220809
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
As a nascent privacy-preserving machine learning (ML) paradigm, federated learning (FL) leverages distributed clients at the network edge to collaboratively train an ML model. Asynchronous FL overcomes the straggler issue of synchronous FL but introduces the staleness problem, which degrades the training performance of FL over wireless networks. To tackle the staleness problem, we develop a staleness compensation algorithm that improves the training performance of FL in terms of convergence and test accuracy: by including the first-order term of the Taylor expansion of the gradient function, the proposed algorithm compensates for the staleness of asynchronous updates. To further reduce training latency, we model client selection in asynchronous FL as a multi-armed bandit problem and develop an online client selection algorithm that minimizes training latency without prior knowledge of the channel conditions or local computing status. Simulation results show that the proposed algorithm outperforms the baseline algorithms in both test accuracy and training latency.
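The abstract names two concrete mechanisms: a first-order Taylor correction of stale gradients and a multi-armed bandit formulation of client selection. The sketch below illustrates both under stated assumptions rather than reproducing the paper's exact algorithm: the Hessian in the Taylor term is replaced by the diagonal outer-product surrogate lam * g * g (in the spirit of delay-compensated SGD), and client selection uses a UCB-style confidence bound on observed round latency. The function and class names, the lam parameter, and the latency-as-reward definition are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def compensate_stale_gradient(stale_grad, w_current, w_stale, lam=0.5):
    """Approximate the fresh gradient from a stale one via a first-order Taylor
    expansion: g(w_current) ~ g(w_stale) + H(w_stale) @ (w_current - w_stale).
    The Hessian is replaced by the diagonal surrogate lam * g * g, an assumption
    borrowed from delay-compensated SGD, not necessarily the paper's exact form."""
    correction = lam * stale_grad * stale_grad * (w_current - w_stale)
    return stale_grad + correction

class LatencyBanditSelector:
    """Treat each client as a bandit arm and its per-round latency as the
    (negative) reward. Selection uses a lower-confidence bound so clients with
    low estimated latency are exploited while rarely sampled clients are still
    explored. The bonus form and constants are illustrative choices."""

    def __init__(self, num_clients, explore_weight=2.0):
        self.counts = np.zeros(num_clients)        # times each client was chosen
        self.mean_latency = np.zeros(num_clients)  # running mean of observed latency
        self.explore_weight = explore_weight
        self.rounds = 0

    def select(self):
        self.rounds += 1
        untried = np.flatnonzero(self.counts == 0)
        if untried.size:                           # sample every client at least once
            return int(untried[0])
        bonus = np.sqrt(self.explore_weight * np.log(self.rounds) / self.counts)
        return int(np.argmin(self.mean_latency - bonus))

    def update(self, client, latency):
        # Incremental update of the running mean latency for the chosen client.
        self.counts[client] += 1
        self.mean_latency[client] += (latency - self.mean_latency[client]) / self.counts[client]
```

In a training loop, the server would call select() before dispatching the current global model, apply compensate_stale_gradient() when a delayed update arrives, and feed the measured round latency back through update().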
Pages: 4124-4129
Page count: 6