Federated Learning With Client Selection and Gradient Compression in Heterogeneous Edge Systems

Cited by: 7
Authors
Xu, Yang [1 ,2 ]
Jiang, Zhida [1 ,2 ]
Xu, Hongli [1 ,2 ]
Wang, Zhiyuan [1 ,2 ]
Qian, Chen [3 ]
Qiao, Chunming [4 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230027, Anhui, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou 215123, Jiangsu, Peoples R China
[3] Univ Calif Santa Cruz, Jack Baskin Sch Engn, Dept Comp Sci & Engn, Santa Cruz, CA 95064 USA
[4] Univ Buffalo, State Univ New York, Dept Comp Sci & Engn, Buffalo, NY 14260 USA
Keywords
Edge computing; federated learning; capability heterogeneity; statistical heterogeneity; resource allocation; communication; quantization; IoT
DOI
10.1109/TMC.2023.3309497
CLC number
TP [Automation Technology; Computer Technology]
Subject classification code
0812
Abstract
Federated learning (FL) has recently gained tremendous attention in edge computing and the Internet of Things, owing to its ability to let distributed clients cooperatively train models while keeping raw data local. However, existing works usually suffer from limited communication resources, dynamic network conditions, and heterogeneous client properties, which hinder efficient FL. To tackle these challenges simultaneously, we propose a heterogeneity-aware FL framework, called FedCG, with adaptive client selection and gradient compression. Specifically, FedCG introduces diversity into client selection and aims to select a representative client subset in view of statistical heterogeneity. The selected clients are assigned different compression ratios based on their heterogeneous and time-varying capabilities. After local training, they upload sparse model updates matched to their capabilities for global aggregation, which effectively reduces communication cost and mitigates the straggler effect. More importantly, instead of naively combining client selection and gradient compression, we show that the two decisions are tightly coupled, indicating the necessity of joint optimization. We theoretically analyze the impact of both client selection and gradient compression on convergence performance. Guided by the convergence rate, we develop an iteration-based algorithm that jointly optimizes client selection and compression-ratio decisions using submodular maximization and linear programming. On this basis, we propose a quantized extension of FedCG, termed Q-FedCG, which further adjusts quantization levels based on gradient innovation. Extensive experiments on both real-world prototypes and simulations show that FedCG and its extension achieve a speedup of up to 6.4x.
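As a concrete illustration of the two coupled decisions described in the abstract, the sketch below pairs greedy diversity-driven client selection (via a facility-location surrogate, a standard monotone submodular objective) with per-client top-k gradient sparsification. This is a minimal Python sketch under stated assumptions, not the authors' implementation: the facility-location objective, the per-client similarity features, the capability-driven ratios, and all function names are illustrative stand-ins for the paper's actual formulation.

```python
import numpy as np

def topk_sparsify(update, ratio):
    """Keep only the largest-magnitude `ratio` fraction of entries.

    A client on a weak or time-varying link gets a smaller `ratio`,
    so it uploads fewer coordinates (the sparse updates in the abstract).
    """
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    kept = np.argpartition(np.abs(flat), -k)[-k:]   # indices of top-k magnitudes
    sparse = np.zeros_like(flat)
    sparse[kept] = flat[kept]
    return sparse.reshape(update.shape)

def greedy_select(features, m):
    """Pick m diverse clients by greedily maximizing facility location.

    f(S) = sum_j max_{i in S} sim(i, j) is monotone submodular, so the
    greedy rule enjoys the classic (1 - 1/e) approximation guarantee.
    """
    sims = features @ features.T                    # pairwise similarities
    n = len(features)
    selected, coverage = [], np.zeros(n)
    for _ in range(m):
        best, best_gain = -1, -np.inf
        for c in range(n):
            if c in selected:
                continue
            gain = np.maximum(coverage, sims[c]).sum() - coverage.sum()
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
        coverage = np.maximum(coverage, sims[best])
    return selected

# Toy round: 10 clients, 1000-parameter updates, capability-driven ratios.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 16))         # per-client statistics (assumed)
updates = rng.normal(size=(10, 1000))
ratios = rng.uniform(0.05, 0.5, size=10)  # stand-in for the LP's output
chosen = greedy_select(feats, m=4)
aggregate = np.mean([topk_sparsify(updates[c], ratios[c]) for c in chosen], axis=0)
```

In the paper's framework the compression ratios themselves come from a linear program driven by the convergence bound, and Q-FedCG would additionally quantize the surviving coordinates based on gradient innovation; both steps are omitted here for brevity.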
Pages: 5446-5461
Page count: 16