Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning

Cited by: 263
Authors
Shi, Wenqi [1 ]
Zhou, Sheng [1 ]
Niu, Zhisheng [1 ]
Jiang, Miao [2 ]
Geng, Lu [2 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing Natl Res Ctr Informat Sci & Technol, Beijing 100084, Peoples R China
[2] Hitachi China Res & Dev Cooperat, Beijing 100190, Peoples R China
Keywords
Federated learning; wireless networks; resource allocation; scheduling; convergence analysis;
DOI
10.1109/TWC.2020.3025446
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809;
Abstract
In federated learning (FL), devices contribute to the global training by uploading their local model updates via wireless channels. Due to limited computation and communication resources, device scheduling is crucial to the convergence rate of FL. In this paper, we propose a joint device scheduling and resource allocation policy to maximize the model accuracy within a given total training time budget for latency-constrained wireless FL. A lower bound on the reciprocal of the training performance loss, in terms of the number of training rounds and the number of scheduled devices per round, is derived. Based on this bound, the accuracy maximization problem is solved by decoupling it into two sub-problems. First, given the scheduled devices, the optimal bandwidth allocation suggests allocating more bandwidth to devices with worse channel conditions or weaker computation capabilities. Then, a greedy device scheduling algorithm is introduced: in each step it selects the device with the least updating time under the optimal bandwidth allocation, and it stops once the lower bound begins to increase, meaning that scheduling more devices would degrade the model accuracy. Experiments show that the proposed policy outperforms state-of-the-art scheduling policies across extensive settings of data distributions and cell radii.
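The greedy scheduling rule summarized above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: `update_time` and `conv_bound` are hypothetical stand-ins for the per-device updating time under the optimal bandwidth allocation and the derived convergence bound (here treated as a quantity to be minimized, matching the stopping rule stated in the abstract).

```python
# Minimal sketch of the greedy device scheduling idea (not the paper's code).
# Assumptions: update_time(device, candidate_set) returns the device's model-update
# latency under the optimal bandwidth allocation for the candidate set, and
# conv_bound(scheduled) evaluates the derived convergence bound; both are
# hypothetical stand-ins for quantities defined in the paper.

def greedy_schedule(devices, update_time, conv_bound):
    scheduled = []
    remaining = list(devices)
    best_bound = float("inf")
    while remaining:
        # Greedy step: pick the unscheduled device with the least updating time.
        candidate = min(remaining, key=lambda d: update_time(d, scheduled + [d]))
        new_bound = conv_bound(scheduled + [candidate])
        # Stop once the bound begins to increase: scheduling more devices
        # would leave too few training rounds within the total time budget.
        if new_bound >= best_bound:
            break
        best_bound = new_bound
        scheduled.append(candidate)
        remaining.remove(candidate)
    return scheduled
```

Under these assumptions, the loop mirrors the stopping rule stated in the abstract: devices are added in order of increasing updating time until the bound worsens.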
Pages: 453-467
Page count: 15