Byzantine-robust federated learning with ensemble incentive mechanism

Cited: 0
Authors
Zhao, Shihai [1 ,2 ]
Pu, Juncheng [1 ,2 ]
Fu, Xiaodong [1 ,2 ]
Liu, Li [1 ]
Dai, Fei [3 ]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650500, Yunnan, Peoples R China
[2] Kunming Univ Sci & Technol, Yunnan Key Lab Comp Technol Applicat, Kunming 650500, Yunnan, Peoples R China
[3] Southwest Forestry Univ, Coll Big Data & Intelligent Engn, Kunming 650224, Yunnan, Peoples R China
Keywords
Federated learning; Byzantine poisoning attack; Ensemble learning; Incentive mechanism; Ensemble quality maximization; Consensus; Privacy
DOI
10.1016/j.future.2024.05.017
CLC number
TP301 [Theory and Methods]
Discipline code
081202
Abstract
Federated learning (FL) is vulnerable to Byzantine attacks due to its distributed nature. Existing defenses, which typically rely on server-based or trust-bootstrapped aggregation rules, often struggle to mitigate the impact when a large proportion of participants are malicious. Additionally, the absence of an effective incentive mechanism in current defenses may lead rational clients to submit meaningless or malicious updates, compromising the global model's effectiveness in federated learning. To tackle these issues, we propose a Byzantine-robust Ensemble Incentive Mechanism (BEIM) that not only leverages ensemble learning to train multiple global models, enhancing the robustness against such attacks, but also establishes a novel incentive mechanism to promote honest participation. Specifically, Byzantine robustness of BEIM can be initially enhanced by motivating high-quality clients to participate in FL. A distance-based aggregation rule is then employed to diminish the influence of malicious clients. Subsequently, the integration of a majority voting scheme across the ensemble models further isolates and dilutes the impact of malicious updates. The properties of truthfulness, individual rationality, and budget feasibility of the incentive mechanism in BEIM are proved theoretically. Empirical results demonstrate that BEIM not only effectively counters the impact of malicious clients, enhancing test accuracy by up to 77.3% compared to existing baselines when over 50% of the clients are malicious, but also fairly rewards clients based on the quality of their contributions.
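As a rough illustration of the two defense steps sketched in the abstract (a distance-based aggregation rule that diminishes the influence of malicious updates, followed by majority voting across the ensemble of global models), the Python sketch below shows one plausible realization. The function names, the keep_ratio parameter, and the choice of the coordinate-wise median as the reference point are assumptions made for illustration; they are not taken from the paper's actual algorithm.

```python
import numpy as np

def distance_filtered_mean(updates, keep_ratio=0.6):
    """Distance-based aggregation sketch (illustrative, not the paper's exact rule):
    keep the client updates closest to the coordinate-wise median and average them."""
    updates = np.asarray(updates)               # shape: (n_clients, n_params)
    center = np.median(updates, axis=0)         # robust reference point (assumption)
    dists = np.linalg.norm(updates - center, axis=1)
    n_keep = max(1, int(keep_ratio * len(updates)))
    kept = np.argsort(dists)[:n_keep]           # indices of the nearest updates
    return updates[kept].mean(axis=0)

def ensemble_majority_vote(models, x):
    """Majority vote over an ensemble of global models.
    Assumes each model exposes predict(x) returning integer class labels."""
    preds = np.stack([m.predict(x) for m in models]).astype(int)  # (n_models, n_samples)
    # per-sample majority label across the ensemble
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
```

In this reading, the distance filter limits how far any single poisoned update can pull one global model, while the vote across independently aggregated models further isolates and dilutes whatever malicious influence survives filtering.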
Pages: 272-283
Number of pages: 12