Byzantine-robust federated learning with ensemble incentive mechanism

Cited by: 0
Authors
Zhao, Shihai [1 ,2 ]
Pu, Juncheng [1 ,2 ]
Fu, Xiaodong [1 ,2 ]
Liu, Li [1 ]
Dai, Fei [3 ]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650500, Yunnan, Peoples R China
[2] Kunming Univ Sci & Technol, Yunnan Key Lab Comp Technol Applicat, Kunming 650500, Yunnan, Peoples R China
[3] Southwest Forestry Univ, Coll Big Data & Intelligent Engn, Kunming 650224, Yunnan, Peoples R China
Keywords
Federated learning; Byzantine poisoning attack; Ensemble learning; Incentive mechanism; Ensemble quality maximization; CONSENSUS; PRIVACY;
DOI
10.1016/j.future.2024.05.017
CLC classification number
TP301 [Theory, Methods];
Discipline classification code
081202;
Abstract
Federated learning (FL) is vulnerable to Byzantine attacks due to its distributed nature. Existing defenses, which typically rely on server-based or trust-bootstrapped aggregation rules, often struggle to mitigate the impact when a large proportion of participants are malicious. Additionally, the absence of an effective incentive mechanism in current defenses may lead rational clients to submit meaningless or malicious updates, compromising the global model's effectiveness in federated learning. To tackle these issues, we propose a Byzantine-robust Ensemble Incentive Mechanism (BEIM) that not only leverages ensemble learning to train multiple global models, enhancing the robustness against such attacks, but also establishes a novel incentive mechanism to promote honest participation. Specifically, Byzantine robustness of BEIM can be initially enhanced by motivating high-quality clients to participate in FL. A distance-based aggregation rule is then employed to diminish the influence of malicious clients. Subsequently, the integration of a majority voting scheme across the ensemble models further isolates and dilutes the impact of malicious updates. The properties of truthfulness, individual rationality, and budget feasibility of the incentive mechanism in BEIM are proved theoretically. Empirical results demonstrate that BEIM not only effectively counters the impact of malicious clients, enhancing test accuracy by up to 77.3% compared to existing baselines when over 50% of the clients are malicious, but also fairly rewards clients based on the quality of their contributions.
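The abstract describes two robustness components: a distance-based aggregation rule that down-weights outlying client updates, and majority voting across an ensemble of global models. The paper's exact formulas are not reproduced in this record, so the snippet below is only a minimal sketch under assumed choices (distance to the coordinate-wise median as the score, a fixed keep ratio, and hard-label voting); the names `distance_based_aggregate` and `ensemble_majority_vote` are illustrative, not BEIM's API.

```python
import numpy as np

def distance_based_aggregate(updates, keep_ratio=0.6):
    """Illustrative distance-based aggregation (not the paper's exact rule).

    Each client update is scored by its Euclidean distance to the
    coordinate-wise median, and only the closest `keep_ratio` fraction
    of updates is averaged into the global update.
    """
    updates = np.asarray(updates)                      # shape: (n_clients, dim)
    center = np.median(updates, axis=0)                # robust reference point
    dists = np.linalg.norm(updates - center, axis=1)   # distance per client
    n_keep = max(1, int(len(updates) * keep_ratio))
    kept = np.argsort(dists)[:n_keep]                  # clients closest to the median
    return updates[kept].mean(axis=0)

def ensemble_majority_vote(models, x):
    """Predict by majority vote over an ensemble of global models.

    `models` is a list of callables mapping an input batch to integer
    class labels; ties break toward the smallest label via bincount/argmax.
    """
    votes = np.stack([m(x) for m in models])           # shape: (n_models, n_samples)
    return np.apply_along_axis(
        lambda v: np.bincount(v).argmax(), axis=0, arr=votes
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.1, size=(6, 4))         # honest updates near 0
    byzantine = rng.normal(5.0, 0.1, size=(4, 4))      # poisoned updates far away
    agg = distance_based_aggregate(np.vstack([honest, byzantine]))
    print("aggregated update:", np.round(agg, 3))      # stays close to 0

    # Three toy "global models": two honest, one corrupted.
    models = [lambda x: (x > 0).astype(int),
              lambda x: (x > 0).astype(int),
              lambda x: (x <= 0).astype(int)]
    print("votes:", ensemble_majority_vote(models, np.array([-1.0, 2.0])))
```

In this toy run the distance filter keeps the six honest updates and the corrupted ensemble member is outvoted two-to-one, which mirrors how the two mechanisms are meant to compound; the incentive mechanism itself (truthfulness, individual rationality, budget feasibility) is a game-theoretic construction not captured by this sketch.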
Pages: 272-283
Number of pages: 12
Related papers
50 records in total
  • [31] Communication-Efficient and Byzantine-Robust Differentially Private Federated Learning
    Li, Min
    Xiao, Di
    Liang, Jia
    Huang, Hui
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (08): 1725-1729
  • [32] Byzantine-robust federated learning over Non-IID data
    Ma, X.
    Li, Q.
    Jiang, Q.
    Ma, Z.
    Gao, S.
    Tian, Y.
    Ma, J.
    Tongxin Xuebao/Journal on Communications, 2023, 44 (06): 138-153
  • [33] Distance-Statistical based Byzantine-robust algorithms in Federated Learning
    Colosimo, Francesco
    De Rango, Floriano
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024: 1034-1035
  • [34] Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering
    Xu, Jian
    Huang, Shao-Lun
    Song, Linqi
    Lan, Tian
    2022 IEEE 42ND INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2022), 2022: 1223-1235
  • [35] Byzantine-Robust Multimodal Federated Learning Framework for Intelligent Connected Vehicle
    Wu, Ning
    Lin, Xiaoming
    Lu, Jianbin
    Zhang, Fan
    Chen, Weidong
    Tang, Jianlin
    Xiao, Jing
    ELECTRONICS, 2024, 13 (18)
  • [36] FedInv: Byzantine-Robust Federated Learning by Inversing Local Model Updates
    Zhao, Bo
    Sun, Peng
    Wang, Tao
    Jiang, Keyu
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 9171-9179
  • [37] BFLMeta: Blockchain-Empowered Metaverse with Byzantine-Robust Federated Learning
    Vu Tuan Truong
    Hoang, Duc N. M.
    Long Bao Le
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023: 5537-5542
  • [38] Using Third-Party Auditor to Help Federated Learning: An Efficient Byzantine-Robust Federated Learning
    Zhang, Zhuangzhuang
    Wu, Libing
    He, Debiao
    Li, Jianxin
    Lu, Na
    Wei, Xuejiang
    IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING, 2024, 9 (06): 848-861
  • [39] Defense against local model poisoning attacks to byzantine-robust federated learning
    Lu, Shiwei
    Li, Ruihu
    Chen, Xuan
    Ma, Yuena
    Frontiers of Computer Science, 2022, 16
  • [40] Efficient Byzantine-Robust and Privacy-Preserving Federated Learning on Compressive Domain
    Hu, Guiqiang
    Li, Hongwei
    Fan, Wenshu
    Zhang, Yushu
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (04): 7116-7127