Wireless Federated Learning With Hybrid Local and Centralized Training: A Latency Minimization Design

Cited by: 18
Authors
Huang, Ning [1 ,2 ]
Dai, Minghui [1 ,2 ]
Wu, Yuan [2 ,3 ,4 ]
Quek, Tony Q. S. [5 ,6 ]
Shen, Xuemin [7 ]
Affiliations
[1] State Key Lab Internet Things Smart City, Macau, Peoples R China
[2] Univ Macau, Dept Comp & Informat Sci, Macau, Peoples R China
[3] Univ Macau, State Key Lab Internet Things Smart City, Macau, Peoples R China
[4] Zhuhai UM Sci & Technol Res Inst, Zhuhai 519031, Peoples R China
[5] Singapore Univ Technol & Design, Informat Syst Technol & Design Pillar, Singapore 487372, Singapore
[6] Natl Cheng Kung Univ, Tainan, Taiwan
[7] Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON N2L 3G1, Canada
Funding
National Research Foundation, Singapore; Natural Sciences and Engineering Research Council of Canada; National Natural Science Foundation of China;
Keywords
Training; Servers; Data models; Computational modeling; Resource management; Convergence; Optimization; Federated learning; hybrid local and centralized training; resource allocation; COMMUNICATION; ALLOCATION;
DOI
10.1109/JSTSP.2022.3223498
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Wireless federated learning (FL) is a collaborative machine learning (ML) framework in which wireless client-devices independently train their ML models and send the locally trained models to the FL server for aggregation. In this paper, we consider the coexistence of privacy-sensitive client-devices and privacy-insensitive yet computing-resource-constrained client-devices, and propose an FL framework with hybrid centralized and local training. Specifically, the privacy-sensitive client-devices perform local ML model training and send their local models to the FL server. Each privacy-insensitive client-device has two options: (i) conducting local training and then sending its local model to the FL server, or (ii) directly sending its local data to the FL server for centralized training. The FL server, after collecting the data from the privacy-insensitive client-devices that choose to upload their local data, conducts centralized training on the received datasets. The global model is then generated by aggregating (i) the local models uploaded by the client-devices and (ii) the model trained centrally by the FL server. Focusing on this hybrid FL framework, we first analyze its convergence with respect to the client-devices' selections between local training and centralized training. We then formulate a joint optimization of the client-devices' selections of local or centralized training, the FL training configuration (i.e., the numbers of local and global iterations), and the bandwidth allocations to the client-devices, with the objective of minimizing the overall latency for reaching FL convergence. Despite the non-convexity of the joint optimization problem, we identify its layered structure and propose an efficient algorithm to solve it. Numerical results demonstrate the advantage of our proposed FL framework with hybrid local and centralized training, as well as of our proposed algorithm, in comparison with several benchmark FL schemes and algorithms.
Pages: 248-263 (16 pages)
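Illustrative sketch
As a rough illustration of the hybrid training described in the abstract, the sketch below shows one global round in which privacy-sensitive devices train locally and upload models, while privacy-insensitive devices upload raw data for server-side centralized training. This is a minimal sketch only, not the paper's implementation: it assumes a linear model trained by gradient descent on squared loss and FedAvg-style aggregation weighted by dataset size, and all names (local_sgd, hybrid_global_round, uploads_data) are hypothetical.

# Minimal sketch of one global round of the hybrid FL scheme (illustrative
# assumptions: linear model, squared loss, size-weighted FedAvg aggregation).
import numpy as np

def local_sgd(w, X, y, num_local_iters, lr=0.01):
    """Run gradient-descent iterations on one dataset (client- or server-side)."""
    for _ in range(num_local_iters):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

def hybrid_global_round(w_global, clients, num_local_iters):
    """One global iteration.

    `clients` is a list of dicts with keys:
      'X', 'y'        -- the device's local dataset
      'uploads_data'  -- True for a privacy-insensitive device that offloads
                         its raw data to the FL server; False for a device
                         that trains locally and uploads only its model.
    """
    models, sizes = [], []
    offloaded_X, offloaded_y = [], []

    for c in clients:
        if c['uploads_data']:
            # Privacy-insensitive, compute-constrained device: raw data is
            # sent to the FL server for centralized training.
            offloaded_X.append(c['X'])
            offloaded_y.append(c['y'])
        else:
            # Privacy-sensitive device: local training, model upload only.
            w_c = local_sgd(w_global.copy(), c['X'], c['y'], num_local_iters)
            models.append(w_c)
            sizes.append(len(c['y']))

    # Server-side centralized training on the pooled uploaded data.
    if offloaded_X:
        Xc = np.vstack(offloaded_X)
        yc = np.concatenate(offloaded_y)
        w_central = local_sgd(w_global.copy(), Xc, yc, num_local_iters)
        models.append(w_central)
        sizes.append(len(yc))

    # Aggregate the uploaded local models and the centrally trained model,
    # weighted by the amount of data behind each one.
    weights = np.array(sizes, dtype=float)
    weights /= weights.sum()
    return sum(wgt * m for wgt, m in zip(weights, models))

In this sketch the server's centrally trained model is simply treated as one more contribution in the size-weighted average, which mirrors the abstract's description of aggregating the uploaded local models together with the model trained by the FL server.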