Adaptive Heterogeneous Client Sampling for Federated Learning Over Wireless Networks

Cited: 0
Authors
Luo, Bing [1 ,2 ]
Xiao, Wenli [3 ]
Wang, Shiqiang [4 ]
Huang, Jianwei [5 ]
Tassiulas, Leandros [6 ,7 ]
Affiliations
[1] Duke Kunshan Univ, Data Sci Res Ctr, Kunshan 215316, Peoples R China
[2] Peng Cheng Lab PCL, Shenzhen 518066, Peoples R China
[3] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[4] IBM TJ Watson Res Ctr, Yorktown Hts, NY 10598 USA
[5] Chinese Univ Hong Kong, Shenzhen Inst Artificial Intelligence & Robot Soc, Sch Sci & Engn, Shenzhen 518172, Peoples R China
[6] Yale Univ, Dept Elect Engn, New Haven, CT 06520 USA
[7] Yale Univ, Inst Network Sci, New Haven, CT 06520 USA
Funding
National Natural Science Foundation of China;
Keywords
Convergence; Bandwidth; Training; Wireless networks; Prototypes; Optimization; Probability; Client sampling; convergence analysis; federated learning; optimization algorithm; statistical heterogeneity; system heterogeneity; wireless networks; OPTIMIZATION;
DOI
10.1109/TMC.2024.3368473
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated learning (FL) algorithms usually sample a fraction of clients in each round (partial participation) when the number of participants is large and the server's communication bandwidth is limited. Recent works on the convergence analysis of FL have focused on unbiased client sampling, e.g., sampling uniformly at random, which suffers from slow wall-clock convergence due to high degrees of system heterogeneity (e.g., diverse computation and communication capacities) and statistical heterogeneity (e.g., unbalanced and non-i.i.d. data). This article aims to design an adaptive client sampling algorithm for FL over wireless networks that tackles both system and statistical heterogeneity to minimize the wall-clock convergence time. We obtain a new tractable convergence bound for FL algorithms with arbitrary client sampling probabilities. Based on the bound, we analytically establish the relationship between the total learning time and the sampling probabilities under an adaptive bandwidth allocation scheme, which results in a non-convex optimization problem. We design an efficient algorithm for learning the unknown parameters in the convergence bound and develop a low-complexity algorithm to approximately solve the non-convex problem. Our solution reveals the impact of system and statistical heterogeneity parameters on the optimal client sampling design. Moreover, our solution shows that as the number of sampled clients increases, the total convergence time first decreases and then increases, because a larger sampling number reduces the number of rounds needed for convergence but lengthens the expected time per round due to limited wireless bandwidth. Experimental results from both a hardware prototype and simulations demonstrate that our proposed sampling scheme significantly reduces the convergence time compared to several baseline sampling schemes.
Notably, for the EMNIST dataset, our scheme on the hardware prototype spends 71% less time than baseline uniform sampling to reach the same target loss.
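The abstract's key tradeoff, that total convergence time is first decreasing and then increasing in the number of sampled clients, can be sketched numerically. The following is a hypothetical illustration, not the paper's actual model: all function forms and constants (`a`, `b`, `compute_time`, `bandwidth`, `bits_per_client`) are made-up stand-ins chosen only to exhibit the described U-shape, where sampling more clients per round reduces the rounds needed (lower sampling variance) but stretches each round as the shared wireless bandwidth is split among more uploads.

```python
def rounds_to_converge(K, a=2000.0, b=50.0):
    # Stylized round count: decreases in K with diminishing returns,
    # mimicking the variance-reduction effect of sampling more clients.
    # a and b are arbitrary illustrative constants.
    return a / K + b

def expected_round_time(K, bandwidth=100.0, bits_per_client=10.0, compute_time=2.0):
    # Stylized per-round time: a fixed computation overhead plus an upload
    # phase in which the shared bandwidth is divided among K clients,
    # so each round takes longer as K grows.
    return compute_time + bits_per_client * K / bandwidth

def total_time(K):
    # Total wall-clock time = (rounds to converge) x (expected time per round).
    return rounds_to_converge(K) * expected_round_time(K)

# Sweep the sampling number K: total time falls, bottoms out, then rises.
times = {K: total_time(K) for K in range(1, 51)}
best_K = min(times, key=times.get)
```

Under these toy constants the minimum lands at an interior `best_K`, with `times[1]` and `times[50]` both strictly larger, reproducing the "first decreases and then increases" behavior the abstract reports; the paper itself derives this tradeoff from its convergence bound rather than from assumed closed forms.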
Pages: 9663-9677
Page count: 15
Related Papers
50 records in total
  • [41] Compressed Hierarchical Federated Learning for Edge-Level Imbalanced Wireless Networks
    Liu, Yuan
    Qu, Zhe
    Wang, Jianxin
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2025,
  • [42] Optimizing Privacy and Latency Tradeoffs in Split Federated Learning Over Wireless Networks
    Lee, Joohyung
    Seif, Mohamed
    Cho, Jungchan
    Poor, H. Vincent
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2024, 13 (12) : 3439 - 3443
  • [43] Adaptive Modulation for Wireless Federated Edge Learning
    Xu, Xinyi
    Yu, Guanding
    Liu, Shengli
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2023, 9 (04) : 1096 - 1109
  • [44] Adaptive Network Pruning for Wireless Federated Learning
    Liu, Shengli
    Yu, Guanding
    Yin, Rui
    Yuan, Jiantao
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2021, 10 (07) : 1572 - 1576
  • [45] FedHelo: Hierarchical Federated Learning With Loss-Based-Heterogeneity in Wireless Networks
    Ye, Yuchuan
    Chen, Youjia
    Yang, Junnan
    Ding, Ming
    Cheng, Peng
    Zheng, Haifeng
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11 (06): : 6066 - 6079
  • [46] Faster Convergence With Less Communication: Broadcast-Based Subgraph Sampling for Decentralized Learning Over Wireless Networks
    Herrera, Daniel Perez
    Chen, Zheng
    Larsson, Erik G.
    IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2025, 6 : 1497 - 1511
  • [47] Differentially Private Federated Learning With Importance Client Sampling
    Chen, Lin
    Ding, Xiaofeng
    Li, Mengqi
    Jin, Hai
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01) : 3635 - 3649
  • [48] Latency Minimization for Wireless Federated Learning With Heterogeneous Local Model Updates
    Zhu, Jingyang
    Shi, Yuanming
    Fu, Min
    Zhou, Yong
    Wu, Youlong
    Fu, Liqun
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (01) : 444 - 461
  • [49] Fairness-Aware Multi-Server Federated Learning Task Delegation Over Wireless Networks
    Gao, Yulan
    Ren, Chao
    Yu, Han
    Xiao, Ming
    Skoglund, Mikael
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2025, 12 (02): : 684 - 697
  • [50] FedPCC: Parallelism of Communication and Computation for Federated Learning in Wireless Networks
    Zhang, Hong
    Tian, Hao
    Dong, Mianxiong
    Ota, Kaoru
    Jia, Juncheng
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2022, 6 (06): : 1368 - 1377