Two-stage personalized federated learning based on sparse pretraining

Cited by: 0
Authors
Liu, Tong [1 ]
Xie, Kaixuan [2 ]
Kong, Yi [1 ]
Chen, Guojun [1 ]
Xu, Yinfei [1 ]
Xin, Lun [2 ]
Yu, Fei [1 ]
Affiliations
[1] Southeast Univ, Nanjing, Peoples R China
[2] China Mobile Res Inst, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
artificial intelligence; convolutional neural nets; data protection; distributed algorithms; image classification; information and communications; neural nets;
DOI
10.1049/ell2.12943
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline classification codes
0808; 0809;
Abstract
Personalized federated learning (PFL) was proposed to counter the performance degradation that federated learning (FL) suffers under heterogeneous data distributions; it produces a dedicated model for each client. However, existing PFL solutions focus only on the performance of the personalized models and ignore the performance of the global model, which discourages new clients from joining. To solve this problem, this paper proposes a new PFL solution, a two-stage PFL based on sparse pretraining, which not only trains a sparse personalized model for each client but also yields a sparse global model. The training process is divided into a sparse pretraining stage and a sparse personalized training stage, which target the performance of the global model and of the personalized models, respectively. A sparse mask aggregation technique is also proposed to preserve the sparsity of the global model during the sparse personalized training stage. Experimental results show that, compared with existing algorithms, the proposed algorithm improves the accuracy of the global model while maintaining state-of-the-art personalized model accuracy and achieving higher communication efficiency. In short, to address the problem that in existing sparse PFL the global model remains dense and performs poorly, the authors' two-stage training approach focuses on the global model in the early stage and on the personalized models in the late stage, while the sparse mask aggregation technique guarantees the sparsity of the global model.
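The abstract describes the algorithm only in prose; the toy Python/NumPy sketch below shows one way the two stages and the sparse mask aggregation could fit together. Everything here is an illustrative assumption rather than the authors' implementation: the linear least-squares clients, the magnitude-based top-k pruning, the 80% sparsity level, and the helper names top_k_mask, local_sgd, and sparse_mask_aggregate are all invented for this sketch.

import numpy as np

def top_k_mask(w, sparsity=0.8):
    # Binary mask keeping the largest-magnitude (1 - sparsity) fraction of weights.
    k = max(1, int(round((1.0 - sparsity) * w.size)))
    thresh = np.sort(np.abs(w).ravel())[-k]
    return (np.abs(w) >= thresh).astype(w.dtype)

def local_sgd(w, X, y, lr=0.1, steps=20):
    # A few steps of least-squares gradient descent on one client's data.
    for _ in range(steps):
        w = w - lr * (X.T @ (X @ w - y)) / len(y)
    return w

def sparse_mask_aggregate(client_weights, client_masks):
    # Average each weight only over the clients whose mask retains it,
    # then re-prune so the aggregated global model stays sparse.
    W = np.stack(client_weights)
    M = np.stack(client_masks)
    avg = (W * M).sum(axis=0) / np.maximum(M.sum(axis=0), 1)
    return avg * top_k_mask(avg)

rng = np.random.default_rng(0)
d, n_clients = 32, 4
true_w = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(64, d))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=64)))

# Stage 1: sparse pretraining -- rounds of local training plus plain
# averaging, pruned after each round so the shared global model is sparse.
global_w = 0.01 * rng.normal(size=d)
for _ in range(10):
    updates = [local_sgd(global_w.copy(), X, y) for X, y in clients]
    avg = np.mean(updates, axis=0)
    global_w = avg * top_k_mask(avg)

# Stage 2: sparse personalized training -- each client trains under its
# own mask, and the server applies sparse mask aggregation.
masks = [top_k_mask(global_w) for _ in range(n_clients)]
for _ in range(10):
    personal = [local_sgd(global_w * m, X, y) * m
                for m, (X, y) in zip(masks, clients)]
    global_w = sparse_mask_aggregate(personal, masks)

In this sketch the client masks coincide because they are all derived from the same pretrained global model; in a real PFL setting each client would prune according to its own data, and the aggregation rule above is what keeps the recombined global model sparse.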
Pages: 3