Byzantine-robust federated learning with ensemble incentive mechanism

Cited by: 1
Authors
Zhao, Shihai [1 ,2 ]
Pu, Juncheng [1 ,2 ]
Fu, Xiaodong [1 ,2 ]
Liu, Li [1 ]
Dai, Fei [3 ]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650500, Yunnan, Peoples R China
[2] Kunming Univ Sci & Technol, Yunnan Key Lab Comp Technol Applicat, Kunming 650500, Yunnan, Peoples R China
[3] Southwest Forestry Univ, Coll Big Data & Intelligent Engn, Kunming 650224, Yunnan, Peoples R China
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2024, Vol. 159
Keywords
Federated learning; Byzantine poisoning attack; Ensemble learning; Incentive mechanism; Ensemble quality maximization; CONSENSUS; PRIVACY;
DOI
10.1016/j.future.2024.05.017
Chinese Library Classification (CLC) number
TP301 [Theory, Methods];
Discipline classification code
081202;
Abstract
Federated learning (FL) is vulnerable to Byzantine attacks due to its distributed nature. Existing defenses, which typically rely on server-based or trust-bootstrapped aggregation rules, often struggle to mitigate the impact when a large proportion of participants are malicious. Additionally, the absence of an effective incentive mechanism in current defenses may lead rational clients to submit meaningless or malicious updates, compromising the global model's effectiveness. To tackle these issues, we propose a Byzantine-robust Ensemble Incentive Mechanism (BEIM) that not only leverages ensemble learning to train multiple global models, enhancing robustness against such attacks, but also establishes a novel incentive mechanism to promote honest participation. Specifically, the Byzantine robustness of BEIM is first enhanced by motivating high-quality clients to participate in FL. A distance-based aggregation rule is then employed to diminish the influence of malicious clients. Subsequently, the integration of a majority voting scheme across the ensemble models further isolates and dilutes the impact of malicious updates. The truthfulness, individual rationality, and budget feasibility of the incentive mechanism in BEIM are proved theoretically. Empirical results demonstrate that BEIM not only effectively counters the impact of malicious clients, improving test accuracy by up to 77.3% over existing baselines when more than 50% of the clients are malicious, but also fairly rewards clients based on the quality of their contributions.
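The two robustness components described in the abstract can be illustrated with a minimal sketch. The exact rules in the paper may differ; here the distance-based aggregation is assumed to keep the client updates closest to the coordinate-wise median before averaging, and the ensemble defense is assumed to be a plain majority vote over the class predictions of the trained models. All function names and parameters below are hypothetical.

```python
import numpy as np

def distance_based_aggregate(updates, n_keep):
    """Keep the n_keep client updates closest (Euclidean distance) to the
    coordinate-wise median, then average them, limiting the pull of
    outlying (potentially Byzantine) updates."""
    updates = np.asarray(updates, dtype=float)      # shape (n_clients, dim)
    median = np.median(updates, axis=0)             # robust reference point
    dists = np.linalg.norm(updates - median, axis=1)
    keep = np.argsort(dists)[:n_keep]               # indices of closest updates
    return updates[keep].mean(axis=0)

def ensemble_majority_vote(model_preds):
    """Majority vote over per-model class predictions.

    model_preds: shape (n_models, n_samples) of integer class labels.
    Returns the per-sample class chosen by most models, so a minority of
    poisoned models cannot flip the ensemble's prediction."""
    preds = np.asarray(model_preds)
    n_classes = preds.max() + 1
    # Count votes per class for each sample column.
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, preds)
    return votes.argmax(axis=0)
```

For example, with updates `[[1.0, 1.0], [1.1, 0.9], [10.0, 10.0]]` and `n_keep=2`, the outlier `[10.0, 10.0]` is discarded and the aggregate is the mean of the two benign updates.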
Pages: 272-283
Number of pages: 12