DBFL: Dynamic Byzantine-Robust Privacy Preserving Federated Learning in Heterogeneous Data Scenario

Cited by: 0
Authors
Chen, Xiaoli [1 ]
Tian, Youliang [1 ,2 ]
Wang, Shuai [1 ]
Yang, Kedi [1 ]
Zhao, Wei [3 ]
Xiong, Jinbo [4 ]
Affiliations
[1] Guizhou Univ, Coll Comp Sci & Technol, Guizhou Prov Key Lab Cryptog & Blockchain Technol, Guiyang 550025, Guizhou, Peoples R China
[2] Guizhou Univ, Coll Big Data & Informat Engn, Guiyang 550025, Guizhou, Peoples R China
[3] Guizhou Univ, Coll Math & Stat, Guiyang 550025, Guizhou, Peoples R China
[4] Fujian Normal Univ, Coll Comp & Cyber Secur, Fujian Prov Key Lab Network Secur & Cryptol, Fuzhou 350117, Fujian, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Defense strategy; Poisoning attacks; Privacy protection; Homomorphic encryption; Federated learning;
DOI
10.1016/j.ins.2024.121849
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Privacy Preserving Federated Learning (PPFL) protects the clients' local data privacy by uploading encrypted gradients to the server. However, in real-world scenarios, the heterogeneous distribution of client data makes it challenging to identify poisoning gradients. During local iterations, the models continuously move in different directions, which causes the boundary between benign and malicious gradients to persistently shift. To address these challenges, we design a Dynamic Byzantine-robust Federated Learning (DBFL) defense strategy based on Two-trapdoor Homomorphic Encryption (THE), which enables the detection of encrypted poisoning attacks in heterogeneous data scenarios. Specifically, we introduce a secure Manhattan distance method that accurately measures the differences between elements of two encrypted gradients, allowing precise detection of poisoning attacks in heterogeneous data scenarios while preserving privacy. Furthermore, we design a Byzantine-tolerant aggregation mechanism based on a dynamic threshold, where the threshold adapts to the continuously shifting boundary between poisoning gradients and benign gradients in heterogeneous data scenarios. This enables DBFL to effectively exclude poisoning gradients even when 70% of the clients are malicious and controlled by Byzantine attackers. Security analysis demonstrates that DBFL achieves IND-CPA security. Extensive evaluations on two benchmark datasets (i.e., MNIST and CIFAR-10) show that DBFL outperforms existing defense strategies. In particular, DBFL achieves a 7%-40% accuracy improvement in the non-IID setting compared to existing solutions for defending against untargeted and targeted attacks.
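The two mechanisms the abstract describes can be sketched in plaintext as follows. This is a minimal illustrative analogue, not the paper's protocol: DBFL computes the Manhattan distances over THE-encrypted gradients, whereas this sketch operates on cleartext NumPy arrays, and the median reference, quantile-based threshold rule, and all function names are assumptions for illustration only.

```python
import numpy as np

def manhattan_scores(gradients, reference):
    """L1 (Manhattan) distance from each client gradient to a reference gradient."""
    return np.array([np.abs(g - reference).sum() for g in gradients])

def dynamic_threshold_filter(gradients, quantile=0.5):
    """Discard gradients whose Manhattan distance to the coordinate-wise median
    exceeds a threshold recomputed every round from the current score
    distribution, so the benign/malicious boundary may drift across rounds."""
    reference = np.median(np.stack(gradients), axis=0)
    scores = manhattan_scores(gradients, reference)
    threshold = np.quantile(scores, quantile)  # re-derived each round
    kept = [g for g, s in zip(gradients, scores) if s <= threshold]
    return np.mean(kept, axis=0)  # aggregate the surviving gradients

# Three near-agreeing benign gradients and one large poisoned gradient:
grads = [np.full(4, 0.1), np.full(4, 0.12), np.full(4, 0.11), np.full(4, 10.0)]
agg = dynamic_threshold_filter(grads)  # poisoned gradient is filtered out
```

Because the threshold is a quantile of the per-round score distribution rather than a fixed constant, it tracks the shifting separation between benign and poisoned updates that the abstract attributes to heterogeneous (non-IID) local data.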
Pages: 17