Enhancing Decentralized and Personalized Federated Learning With Topology Construction

Cited by: 2
Authors
Chen, Suo [1 ,2 ]
Xu, Yang [1 ,2 ]
Xu, Hongli [1 ,2 ]
Ma, Zhenguo [3 ]
Wang, Zhiyuan [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230027, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou 215123, Peoples R China
[3] Zhejiang Lab, Res Ctr Data Hub & Secur, Hangzhou 311121, Peoples R China
Funding
US National Science Foundation;
Keywords
Training; Computational modeling; Topology; Data models; Network topology; Federated learning; Mobile computing; Personalized federated learning; P2P communication; topology construction; edge computing
DOI
10.1109/TMC.2024.3367872
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The emerging Federated Learning (FL) paradigm permits workers (e.g., mobile devices) to cooperatively train a model on their local data at the network edge. To avoid the potential bottleneck of the conventional parameter-server architecture, decentralized federated learning (DFL) is built on peer-to-peer (P2P) communication. The non-IID data issue is a key challenge in FL and significantly degrades model training performance. To this end, we propose a personalized solution called TOPFL, in which only parts of the local models (not the entire models) are shared and aggregated. Moreover, considering the limited communication bandwidth of workers, we propose a topology construction algorithm to accelerate the training process. To verify the convergence of the decentralized training framework, we theoretically analyze the impact of data heterogeneity and topology on the convergence upper bound. Extensive simulation results show that, compared with the baseline solutions, TOPFL achieves a 2.2× speedup in reaching convergence and 5.8% higher test accuracy under the same resource consumption.
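The abstract describes two mechanisms: partial model sharing (only some layers of each local model are exchanged and aggregated with P2P neighbors, while the rest stays personalized) and a topology that governs which workers communicate. The paper's actual algorithm is not reproduced in this record; what follows is a minimal Python/NumPy sketch of the general idea, where each worker runs local SGD on its full model but averages only a designated shared part with its neighbors over a fixed ring topology. All names here (SHARED_KEYS, mix_shared, the toy gradient) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (not the authors' code): decentralized training with
# partial model sharing. Each worker runs local SGD on its full model,
# then averages only the SHARED_KEYS layers with its P2P neighbors;
# the "head" never leaves the worker, which is what personalizes it.

SHARED_KEYS = {"conv1", "conv2"}   # layers exchanged with neighbors (assumption)

def init_model(rng):
    return {
        "conv1": rng.standard_normal(8),
        "conv2": rng.standard_normal(8),
        "head": rng.standard_normal(4),   # personalized part, kept local
    }

def local_step(model, grad_fn, lr=0.1):
    """One step of local SGD on all parameters."""
    for name, w in model.items():
        model[name] = w - lr * grad_fn(name, w)

def mix_shared(models, topology):
    """Uniformly average each worker's shared layers with its neighbors.

    topology[i] is the set of worker i's neighbors; personalized layers
    are copied through untouched.
    """
    mixed = [dict(m) for m in models]
    for i in range(len(models)):
        group = [i] + sorted(topology[i])
        for name in SHARED_KEYS:
            mixed[i][name] = sum(models[j][name] for j in group) / len(group)
    return mixed

# Toy run: 4 workers on a ring topology (a stand-in for the paper's
# constructed topology, which this record does not detail).
rng = np.random.default_rng(0)
models = [init_model(rng) for _ in range(4)]
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
toy_grad = lambda name, w: 0.01 * w    # placeholder gradient for the demo
for _ in range(10):
    for m in models:
        local_step(m, toy_grad)
    models = mix_shared(models, ring)
```

In this style of method, personalization comes from the head layers never being aggregated, which is how partial sharing mitigates non-IID data; the choice of topology then controls how fast the shared layers reach consensus under the workers' bandwidth limits.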
Pages: 9692-9707
Page count: 16