Byzantine-robust federated learning with ensemble incentive mechanism

Citations: 0
Authors
Zhao, Shihai [1,2]
Pu, Juncheng [1,2]
Fu, Xiaodong [1,2]
Liu, Li [1]
Dai, Fei [3]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650500, Yunnan, Peoples R China
[2] Kunming Univ Sci & Technol, Yunnan Key Lab Comp Technol Applicat, Kunming 650500, Yunnan, Peoples R China
[3] Southwest Forestry Univ, Coll Big Data & Intelligent Engn, Kunming 650224, Yunnan, Peoples R China
Keywords
Federated learning; Byzantine poisoning attack; Ensemble learning; Incentive mechanism; Ensemble quality maximization; Consensus; Privacy
DOI
10.1016/j.future.2024.05.017
CLC Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Federated learning (FL) is vulnerable to Byzantine attacks due to its distributed nature. Existing defenses, which typically rely on server-based or trust-bootstrapped aggregation rules, often struggle to mitigate the impact when a large proportion of participants are malicious. Additionally, the absence of an effective incentive mechanism in current defenses may lead rational clients to submit meaningless or malicious updates, compromising the effectiveness of the global model. To tackle these issues, we propose a Byzantine-robust Ensemble Incentive Mechanism (BEIM) that not only leverages ensemble learning to train multiple global models, enhancing robustness against such attacks, but also establishes a novel incentive mechanism to promote honest participation. Specifically, the Byzantine robustness of BEIM is first strengthened by motivating high-quality clients to participate in FL. A distance-based aggregation rule is then employed to diminish the influence of malicious clients. Finally, a majority voting scheme across the ensemble models further isolates and dilutes the impact of malicious updates. The incentive mechanism in BEIM is proved to be truthful, individually rational, and budget feasible. Empirical results demonstrate that BEIM not only effectively counters the impact of malicious clients, improving test accuracy by up to 77.3% over existing baselines when more than 50% of the clients are malicious, but also fairly rewards clients based on the quality of their contributions.
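This record does not spell out BEIM's exact aggregation or voting rules, so the following Python sketch is only a rough illustration of the two ingredients the abstract names: a distance-based aggregation rule (here assumed to be filtering by distance to the coordinate-wise median) and hard majority voting across an ensemble of global models. All function names and the keep_frac parameter are hypothetical.

```python
# Illustrative sketch only: the record does not give BEIM's exact rules.
# Assumed instantiation: a distance-based filter around the coordinate-wise
# median plus hard majority voting over an ensemble of global models.
import numpy as np

def distance_filtered_mean(updates, keep_frac=0.5):
    """Average only the client updates closest to the coordinate-wise median.

    updates: list of 1-D numpy arrays (flattened model updates).
    keep_frac: assumed fraction of clients to keep (hypothetical parameter).
    """
    stacked = np.stack(updates)                       # (n_clients, dim)
    median = np.median(stacked, axis=0)               # robust reference point
    dists = np.linalg.norm(stacked - median, axis=1)  # distance per client
    n_keep = max(1, int(len(updates) * keep_frac))
    keep = np.argsort(dists)[:n_keep]                 # clients nearest median
    return stacked[keep].mean(axis=0)

def ensemble_majority_vote(models, x):
    """Predict by hard majority vote over an ensemble of global models.

    models: list of callables mapping an input batch to integer class labels.
    x: input batch.
    """
    votes = np.stack([m(x) for m in models])          # (n_models, n_samples)
    # Most frequent label per sample; labels must be non-negative integers.
    return np.array([np.bincount(votes[:, i]).argmax()
                     for i in range(votes.shape[1])])
```

Under this reading, a minority of arbitrarily corrupted updates cannot move the median reference point far, and an attacker must corrupt a majority of the ensemble's models to flip a prediction, which matches the abstract's claim that voting isolates and dilutes malicious updates.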
Pages: 272-283
Page count: 12
Related Papers
50 records in total
  • [21] FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. Cao, Xiaoyu; Fang, Minghong; Liu, Jia; Gong, Neil Zhenqiang. 28th Annual Network and Distributed System Security Symposium (NDSS 2021), 2021.
  • [22] An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning. Li, Shenghui; Ngai, Edith; Voigt, Thiemo. IEEE Transactions on Big Data, 2024, 10(6): 975-988.
  • [23] Lightweight Byzantine-Robust and Privacy-Preserving Federated Learning. Lu, Zhi; Lu, Songfeng; Cui, Yongquan; Wu, Junjun; Nie, Hewang; Xiao, Jue; Yi, Zepu. Euro-Par 2024: Parallel Processing, Part II, 2024, 14802: 274-287.
  • [24] Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy. Zhang, Zikai; Hu, Rui. 2023 IEEE Conference on Communications and Network Security (CNS), 2023.
  • [25] SEAR: Secure and Efficient Aggregation for Byzantine-Robust Federated Learning. Zhao, Lingchen; Jiang, Jianlin; Feng, Bo; Wang, Qian; Shen, Chao; Li, Qi. IEEE Transactions on Dependable and Secure Computing, 2022, 19(5): 3329-3342.
  • [26] FLForest: Byzantine-robust Federated Learning through Isolated Forest. Wang, Tao; Zhao, Bo; Fang, Liming. 2022 IEEE 28th International Conference on Parallel and Distributed Systems (ICPADS), 2022: 296-303.
  • [27] Byzantine-robust Federated Learning via Cosine Similarity Aggregation. Zhu, Tengteng; Guo, Zehua; Yao, Chao; Tan, Jiaxin; Dou, Songshi; Wang, Wenrun; Han, Zhenzhen. Computer Networks, 2024, 254.
  • [28] Byzantine-Robust and Communication-Efficient Personalized Federated Learning. Zhang, Jiaojiao; He, Xuechao; Huang, Yue; Ling, Qing. IEEE Transactions on Signal Processing, 2025, 73: 26-39.
  • [29] Byzantine-Robust and Privacy-Preserving Federated Learning With Irregular Participants. Chen, Yinuo; Tan, Wuzheng; Zhong, Yijian; Kang, Yulin; Yang, Anjia; Weng, Jian. IEEE Internet of Things Journal, 2024, 11(21): 35193-35205.
  • [30] BRFL: A blockchain-based byzantine-robust federated learning model. Li, Yang; Xia, Chunhe; Li, Chang; Wang, Tianbo. Journal of Parallel and Distributed Computing, 2025, 196.