FedStar: Efficient Federated Learning on Heterogeneous Communication Networks

Cited by: 2
Authors
Cao, Jing [1 ,2 ]
Wei, Ran [2 ,3 ]
Cao, Qianyue [2 ,3 ]
Zheng, Yongchun [2 ,3 ]
Zhu, Zongwei [2 ,3 ]
Ji, Cheng [4 ]
Zhou, Xuehai [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou 215123, Peoples R China
[3] Univ Sci & Technol China, Sch Software Engn, Hefei 230026, Peoples R China
[4] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Keywords
Artificial intelligence (AI); edge computing; federated learning (FL); heterogeneous networks
DOI
10.1109/TCAD.2023.3346274
CLC number
TP3 [Computing technology; computer technology]
Discipline classification code
0812
Abstract
The proliferation of multimedia applications and the increased computing power of mobile devices have led to personalized artificial intelligence (AI) applications that utilize the massive amount of user information residing on these devices. However, the traditional centralized training paradigm is not applicable in this scenario due to potential privacy risks and high communication overhead. Federated learning (FL) offers an alternative for these applications. Nevertheless, the heterogeneity of computing and communication latency among devices poses great challenges to building efficient learning frameworks. Existing optimizations for FL either fail to speed up training on heterogeneous devices or suffer from poor communication efficiency. In this article, we propose FedStar, an efficient FL framework that supports decentralized asynchronous training on heterogeneous communication networks. To account for the heterogeneous computing power in the network, FedStar runs a heterogeneity-aware number of local steps on each device. Moreover, to account for heterogeneous communication latency and possibly unreachable communication paths between some devices, FedStar generates a decentralized communication topology that achieves maximal training throughput. Finally, it adopts weighted aggregation to guarantee high convergence accuracy of the global model. Theoretical analysis establishes the convergence behavior of FedStar under nonconvex settings. Experimental results show that FedStar achieves a speedup of up to 4.81x over state-of-the-art FL schemes while maintaining high convergence accuracy.
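The abstract describes two of FedStar's mechanisms: each device runs a number of local training steps matched to its computing power, and the resulting models are combined by weighted aggregation. The sketch below illustrates that general pattern on a toy quadratic objective; it is a minimal illustration, not FedStar's actual algorithm, and the function names (`local_train`, `weighted_aggregate`), the step counts, and the uniform weights are all illustrative assumptions.

```python
import numpy as np

def local_train(model, grad_fn, steps, lr=0.1):
    # Run a device-specific number of local SGD steps;
    # faster devices perform more steps per round.
    for _ in range(steps):
        model = model - lr * grad_fn(model)
    return model

def weighted_aggregate(models, weights):
    # Weighted average of device models; in practice the weights
    # could reflect data size or local step counts (illustrative).
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

# Toy per-device loss f_i(x) = ||x - target_i||^2.
targets = [np.array([1.0, 1.0]), np.array([3.0, -1.0])]
grad_fns = [lambda x, t=t: 2.0 * (x - t) for t in targets]

global_model = np.zeros(2)
local_steps = [8, 2]          # heterogeneity-aware: device 0 is faster
for _ in range(20):           # communication rounds
    local_models = [local_train(global_model, g, s)
                    for g, s in zip(grad_fns, local_steps)]
    global_model = weighted_aggregate(local_models, weights=[1, 1])

total_loss = sum(float(np.sum((global_model - t) ** 2)) for t in targets)
```

After a few rounds the aggregated model settles between the two device optima, with total loss well below its initial value; the uneven step counts bias the fixed point toward the faster device, which is why weighting matters in the real scheme.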
Pages: 1848-1861
Page count: 14