Client selection optimization method based on dynamic programming for federated learning

Cited by: 0
Authors
Zhang, Zhongliang [1 ]
Gong, Shengchen [1 ]
Wang, Yi [1 ]
Luo, Xinggang [1 ]
Affiliations
[1] Experimental Center of Data Sciences and Intelligent Decision-Making, Hangzhou Dianzi University, Hangzhou
Source
Xitong Gongcheng Lilun yu Shijian/System Engineering Theory and Practice | 2024 / Vol. 44 / No. 12
Funding
National Natural Science Foundation of China;
Keywords
client selection; dynamic programming; federated learning; privacy computing; Shapley Value;
DOI
10.12011/SETP2023-0686
Abstract
Federated learning is a distributed machine learning technique that enables clients with limited resources to collaboratively train models without sharing private data, effectively protecting clients' data privacy. Classic federated learning systems lack a strict mechanism for selecting clients and generally use an averaging strategy to aggregate local model parameters, which may allow poor-quality clients into the training process and consequently degrade the overall performance of the final model. To address this issue, a federated learning client selection method based on dynamic programming, FedWeight, is proposed. The method uses the Shapley Value to measure each client's contribution in different communication rounds, addressing the inherent difficulty of directly evaluating clients' data quality. The server uses the Shapley Value as a key metric to dynamically select high-quality clients, and then improves the overall performance of the model by aggregating the updates of these high-quality clients. To construct different federated learning scenarios, the MNIST, CIFAR-10, Fashion-MNIST, EMNIST, and KMNIST datasets are used in the experiments. The experimental results demonstrate that the proposed method effectively identifies high-quality clients, and that the performance of the final federated model is almost unaffected by poor-quality clients. Furthermore, the method shows significant advantages in convergence speed and model stability. © 2024 Systems Engineering Society of China. All rights reserved.
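The core idea described above — scoring each client by its Shapley Value (its average marginal contribution to model utility over all orderings of the client set) and keeping only the top scorers for aggregation — can be illustrated with a minimal sketch. This is not the paper's actual FedWeight implementation: the `quality` scores, the additive `utility` function, and all names below are hypothetical stand-ins for what would, in practice, be validation accuracy of a model aggregated from a coalition of clients.

```python
from itertools import permutations

def shapley_values(clients, utility):
    """Exact Shapley value of each client: the client's marginal
    contribution to the coalition utility, averaged over all orderings.
    Exponential in len(clients), so only feasible for small client sets;
    real systems use sampled approximations."""
    values = {c: 0.0 for c in clients}
    perms = list(permutations(clients))
    for order in perms:
        coalition = set()
        prev = utility(coalition)
        for c in order:
            coalition.add(c)
            cur = utility(coalition)
            values[c] += cur - prev  # marginal contribution of c
            prev = cur
    return {c: v / len(perms) for c, v in values.items()}

def select_clients(values, k):
    """Keep the k clients with the highest Shapley value."""
    return sorted(values, key=values.get, reverse=True)[:k]

# Hypothetical per-client quality scores standing in for validation
# accuracy; client "c3" is a poor-quality client that hurts the model.
quality = {"c1": 0.30, "c2": 0.25, "c3": -0.10}

def utility(coalition):
    # Toy additive utility; a real utility would evaluate the
    # aggregated federated model on a held-out validation set.
    return sum(quality[c] for c in coalition)

sv = shapley_values(list(quality), utility)
print(select_clients(sv, 2))  # → ['c1', 'c2']
```

With an additive utility, each client's Shapley value equals its own quality score, so the poor-quality client `c3` is excluded from aggregation; a non-additive utility (as in real federated evaluation) would capture interaction effects between clients as well.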
Pages: 4064-4083
Page count: 19
References
50 in total
[1]  
Wu J J, Liu G N, Wang J Y, Et al., Data intelligence: Trends and challenges[J], Systems Engineering — Theory & Practice, 40, 8, pp. 2116-2149, (2020)
[2]  
Shen Y, Zhong W J, Mei S E., The influence of privacy protection on enterprise pricing strategy[J], Systems Engineering — Theory & Practice, 42, 2, pp. 368-381, (2022)
[3]  
McMahan B, Moore E, Ramage D, Et al., Communication-efficient learning of deep networks from decentralized data[C], Artificial Intelligence and Statistics, pp. 1273-1282, (2017)
[4]  
Yang Q, Liu Y, Cheng Y, Et al., Federated learning[M], Publishing House of Electronics Industry, Beijing, (2020)
[5]  
Long G, Tan Y, Jiang J, Et al., Federated learning for open banking[C], Federated Learning: Privacy and Incentive, pp. 240-254, (2020)
[6]  
Antunes R S, Andre da Costa C, Kuderle A, Et al., Federated learning for healthcare: Systematic review and architecture proposal[J], ACM Transactions on Intelligent Systems and Technology (TIST), 13, 4, pp. 1-23, (2022)
[7]  
Wang P, Yang Z W, Li H J., Federated edge learning with reconfigurable intelligent surface and its application in Internet of vehicles[J], Journal on Communications, 44, 10, pp. 46-57, (2023)
[8]  
Jiang J C, Kantarci B, Oktug S, Et al., Federated learning in smart city sensing: Challenges and opportunities, Sensors, 20, 21, (2020)
[9]  
Wang Y, Li G L, Li K Y., Survey on contribution evaluation for federated learning[J], Journal of Software, 34, 3, pp. 1168-1192, (2023)
[10]  
Wang B, Dai X R, Wang W, Et al., Adversarial examples for poisoning attacks against federated learning[J], Scientia Sinica Informationis, 53, pp. 470-484, (2023)