FedStar: Efficient Federated Learning on Heterogeneous Communication Networks

Cited by: 2
Authors
Cao, Jing [1 ,2 ]
Wei, Ran [2 ,3 ]
Cao, Qianyue [2 ,3 ]
Zheng, Yongchun [2 ,3 ]
Zhu, Zongwei [2 ,3 ]
Ji, Cheng [4 ]
Zhou, Xuehai [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou 215123, Peoples R China
[3] Univ Sci & Technol China, Sch Software Engn, Hefei 230026, Peoples R China
[4] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Keywords
Artificial intelligence (AI); edge computing; federated learning (FL); heterogeneous networks
DOI
10.1109/TCAD.2023.3346274
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Subject classification code
0812
Abstract
The proliferation of multimedia applications and the increased computing power of mobile devices have led to the development of personalized artificial intelligence (AI) applications that utilize the massive amount of user information residing on these devices. However, the traditional centralized training paradigm is not applicable in this scenario due to potential privacy risks and high communication overhead. Federated learning (FL) offers an alternative for these applications. Nevertheless, the heterogeneity of computing power and communication latency among devices poses great challenges to building efficient learning frameworks. Existing FL optimizations either fail to speed up training on heterogeneous devices or suffer from poor communication efficiency. In this article, we propose FedStar, an efficient FL framework that supports decentralized asynchronous training on heterogeneous communication networks. To account for the heterogeneous computing power in the network, FedStar runs a heterogeneity-aware number of local steps on each device. Moreover, because communication latency is heterogeneous and some communication paths between devices may be unreachable, FedStar generates a decentralized communication topology that achieves maximal training throughput. Finally, it adopts weighted aggregation to guarantee high convergence accuracy of the global model. Theoretical analysis establishes the convergence behavior of FedStar under nonconvex settings. Experimental results show that FedStar achieves a speedup of up to 4.81x over state-of-the-art FL schemes while maintaining high convergence accuracy.
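
For intuition only, the following minimal sketch shows how heterogeneity-aware local steps and weighted aggregation could fit together in a single training round. Everything in it (the function names, the "speed" field, the step rule, and the sample-count weights) is an assumption made for illustration; the paper's actual step schedule, decentralized topology generation, and aggregation weights are not reproduced here.

    # Illustrative sketch (not FedStar's implementation): each device runs a
    # number of local SGD steps proportional to its relative compute speed,
    # then the updated models are averaged with weights proportional to the
    # amount of data each device effectively processed.
    import numpy as np

    def local_train(model, data, lr, num_steps, grad_fn):
        """Run num_steps of SGD on one device; returns the updated model."""
        for _ in range(num_steps):
            x, y = data[np.random.randint(len(data))]
            model = model - lr * grad_fn(model, x, y)
        return model

    def heterogeneity_aware_round(global_model, devices, lr, base_steps, grad_fn):
        """One round: heterogeneity-aware local steps + weighted aggregation.

        devices is a list of dicts with keys 'data' (list of (x, y) pairs)
        and 'speed' (relative compute power); both the step rule and the
        weights below are assumptions for illustration.
        """
        updates, weights = [], []
        for dev in devices:
            # Faster devices run more local steps, so all finish at similar
            # wall-clock times instead of waiting on stragglers.
            steps = max(1, int(base_steps * dev["speed"]))
            updates.append(
                local_train(global_model.copy(), dev["data"], lr, steps, grad_fn)
            )
            # Weight each update by the number of samples it processed.
            weights.append(steps * len(dev["data"]))
        w = np.asarray(weights, dtype=float)
        w /= w.sum()
        # Weighted aggregation of the locally trained models.
        return sum(wi * ui for wi, ui in zip(w, updates))

In a decentralized setting such as FedStar's, the aggregation above would run over each device's neighbors in the generated communication topology rather than over all devices at a central server.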
Pages: 1848-1861
Page count: 14