HFedCWA: heterogeneous federated learning algorithm based on contribution-weighted aggregation

Cited by: 1
Authors
Du, Jiawei [1 ]
Wang, Huaijun [1 ]
Li, Junhuai [1 ]
Wang, Kan [1 ]
Fei, Rong [1 ]
Affiliations
[1] Xian Univ Technol, Sch Comp Sci & Engn, 5 Jinhua South Rd, Xian 710048, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Data heterogeneity; Aggregation algorithms; Personalized methods; Differential privacy;
DOI
10.1007/s10489-024-06123-4
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The aim of heterogeneous federated learning (HFL) is to address the issues of data heterogeneity, computational resource disparity, and model generalizability and security in federated learning (FL). To facilitate collaborative training on distributed data and enhance the predictive performance of models, a heterogeneous federated learning algorithm based on contribution-weighted aggregation (HFedCWA) is proposed in this paper. First, weights are assigned on the basis of the distribution differences and quantities of heterogeneous device data, and a contribution-based weighted aggregation method is introduced to dynamically adjust weights and balance data heterogeneity. Second, personalized strategies based on regularization are formulated for heterogeneous devices with different weights, enabling each device to participate in the overall task in an optimal manner. Differential privacy methods are concurrently applied during FL training to further enhance the security of the system. Finally, experiments are conducted under various data heterogeneity scenarios using the MNIST and CIFAR10 datasets, and the results demonstrate that HFedCWA can effectively improve the model's generalization ability and adaptability to heterogeneous data, thereby enhancing the overall efficiency and performance of the HFL system.
Pages: 16
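The record above only summarizes the method; the exact weighting formula and privacy mechanism are not given here. The following is a minimal, illustrative Python sketch of what a contribution-weighted aggregation step could look like, assuming client weights combine dataset size with how close each client's label distribution is to the global mixture, and with Gaussian noise standing in for the differential-privacy step. The function names (`contribution_weights`, `aggregate`) and the specific formulas are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def contribution_weights(label_dists, sample_counts, alpha=0.5):
    """Hypothetical contribution weights: larger datasets and label
    distributions closer to the global mixture receive more weight.
    (Assumed formula; the paper's actual weighting may differ.)"""
    label_dists = np.asarray(label_dists, dtype=float)      # (clients, classes)
    sample_counts = np.asarray(sample_counts, dtype=float)  # (clients,)
    global_dist = (sample_counts[:, None] * label_dists).sum(axis=0) / sample_counts.sum()
    eps = 1e-12
    # KL divergence of each client's label distribution from the global mixture
    kl = (label_dists * np.log((label_dists + eps) / (global_dist + eps))).sum(axis=1)
    size_term = sample_counts / sample_counts.sum()
    closeness = np.exp(-kl)                 # 1 when identical, -> 0 when very skewed
    closeness = closeness / closeness.sum()
    raw = alpha * size_term + (1.0 - alpha) * closeness
    return raw / raw.sum()

def aggregate(client_params, weights, dp_sigma=0.0, rng=None):
    """Weighted average of client parameter vectors, with optional Gaussian
    noise as a simplified stand-in for the differential-privacy mechanism."""
    if rng is None:
        rng = np.random.default_rng(0)
    agg = sum(w * p for w, p in zip(weights, client_params))
    if dp_sigma > 0:
        agg = agg + rng.normal(0.0, dp_sigma, size=agg.shape)
    return agg

# Toy usage: three clients with different sizes and label skews
dists = [[0.5, 0.5], [0.9, 0.1], [0.2, 0.8]]
counts = [1000, 200, 500]
params = [np.full(4, v) for v in (0.1, 0.9, 0.4)]
w = contribution_weights(dists, counts)
print("weights:", w)
print("aggregated:", aggregate(params, w, dp_sigma=0.01))
```

In this sketch, the client whose label distribution matches the global mixture and holds the most samples dominates the average, which mirrors the abstract's idea of weighting contributions to balance data heterogeneity; the regularization-based personalization described in the abstract is not shown.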