Wireless Federated Learning With Hybrid Local and Centralized Training: A Latency Minimization Design

Cited by: 18
Authors
Huang, Ning [1 ,2 ]
Dai, Minghui [1 ,2 ]
Wu, Yuan [2 ,3 ,4 ]
Quek, Tony Q. S. [5 ,6 ]
Shen, Xuemin [7 ]
Affiliations
[1] State Key Lab Internet Things Smart City, Macau, Peoples R China
[2] Univ Macau, Dept Comp & Informat Sci, Macau, Peoples R China
[3] Univ Macau, State Key Lab Internet Things Smart City, Macau, Peoples R China
[4] Zhuhai UM Sci & Technol Res Inst, Zhuhai 519031, Peoples R China
[5] Singapore Univ Technol & Design, Informat Syst Technol & Design Pillar, Singapore 487372, Singapore
[6] Natl Cheng Kung Univ, Tainan, Taiwan
[7] Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON N2L 3G1, Canada
Funding
National Research Foundation of Singapore; Natural Sciences and Engineering Research Council of Canada; National Natural Science Foundation of China;
Keywords
Training; Servers; Data models; Computational modeling; Resource management; Convergence; Optimization; Federated learning; hybrid local and centralized training; resource allocation; COMMUNICATION; ALLOCATION;
DOI
10.1109/JSTSP.2022.3223498
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline classification codes
0808 ; 0809 ;
Abstract
Wireless federated learning (FL) is a collaborative machine learning (ML) framework in which wireless client-devices independently train their ML models and send the locally trained models to the FL server for aggregation. In this paper, we consider the coexistence of privacy-sensitive client-devices and privacy-insensitive yet computing-resource-constrained client-devices, and propose an FL framework with hybrid centralized and local training. Specifically, the privacy-sensitive client-devices perform local ML model training and send their local models to the FL server. Each privacy-insensitive client-device has two options: (i) conducting local training and then sending its local model to the FL server, or (ii) directly sending its local data to the FL server for centralized training. After collecting the data from the privacy-insensitive client-devices that choose to upload their local data, the FL server conducts centralized training on the received datasets. The global model is then generated by aggregating (i) the local models uploaded by the client-devices and (ii) the model trained centrally by the FL server. Focusing on this hybrid FL framework, we first analyze its convergence behavior with respect to the client-devices' selections between local and centralized training. We then formulate a joint optimization of the client-devices' selections between local and centralized training, the FL training configuration (i.e., the numbers of local and global iterations), and the bandwidth allocations to the client-devices, with the objective of minimizing the overall latency for reaching FL convergence. Despite the non-convexity of the joint optimization problem, we identify its layered structure and propose an efficient algorithm to solve it.
Numerical results demonstrate the advantage of our proposed FL framework with the hybrid local and centralized training as well as our proposed algorithm, in comparison with several benchmark FL schemes and algorithms.
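The hybrid aggregation described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's algorithm: it assumes a FedAvg-style weighted average over (i) models trained locally on each device's data and (ii) one model the server trains centrally on the pooled uploaded datasets, using a simple linear-regression task and plain gradient descent. All function names (`local_sgd`, `hybrid_round`, `make_set`) and hyperparameters are invented for illustration.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.01, steps=5):
    """A few local gradient-descent steps on one dataset (toy stand-in
    for the local iterations of one device, or the server's training)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def hybrid_round(w_global, local_sets, uploaded_sets):
    """One global iteration: aggregate the locally trained models and the
    server's centrally trained model, weighted by dataset sizes."""
    models, weights = [], []
    # (i) devices that train locally (privacy-sensitive, or by choice)
    for X, y in local_sets:
        models.append(local_sgd(w_global.copy(), X, y))
        weights.append(len(y))
    # (ii) server trains one model on the pooled uploaded datasets
    if uploaded_sets:
        Xc = np.vstack([X for X, _ in uploaded_sets])
        yc = np.concatenate([y for _, y in uploaded_sets])
        models.append(local_sgd(w_global.copy(), Xc, yc))
        weights.append(len(yc))
    weights = np.array(weights, dtype=float)
    return sum(a * m for a, m in zip(weights / weights.sum(), models))

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])

def make_set(n):
    """Synthetic regression data drawn around a shared ground truth."""
    X = rng.normal(size=(n, 2))
    return X, X @ w_true + 0.01 * rng.normal(size=n)

local_sets = [make_set(50), make_set(60)]     # devices training locally
uploaded_sets = [make_set(40), make_set(30)]  # devices uploading data
w = np.zeros(2)
for _ in range(100):                          # global iterations
    w = hybrid_round(w, local_sets, uploaded_sets)
print(np.round(w, 2))
```

Under these assumptions the aggregated model recovers the shared ground-truth weights, since every dataset is drawn from the same distribution; the paper's actual contribution concerns how the local/centralized split, iteration counts, and bandwidth allocation are jointly chosen to minimize latency, which this sketch does not model.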
Pages: 248-263 (16 pages)