DBFL: Dynamic Byzantine-Robust Privacy Preserving Federated Learning in Heterogeneous Data Scenario

Cited by: 0
Authors
Chen, Xiaoli [1 ]
Tian, Youliang [1 ,2 ]
Wang, Shuai [1 ]
Yang, Kedi [1 ]
Zhao, Wei [3 ]
Xiong, Jinbo [4 ]
Affiliations
[1] Guizhou Univ, Coll Comp Sci & Technol, Guizhou Prov Key Lab Cryptog & Blockchain Technol, Guiyang 550025, Guizhou, Peoples R China
[2] Guizhou Univ, Coll Big Data & Informat Engn, Guiyang 550025, Guizhou, Peoples R China
[3] Guizhou Univ, Coll Math & Stat, Guiyang 550025, Guizhou, Peoples R China
[4] Fujian Normal Univ, Coll Comp & Cyber Secur, Fujian Prov Key Lab Network Secur & Cryptol, Fuzhou 350117, Fujian, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Defense strategy; Poisoning attacks; Privacy protection; Homomorphic encryption; Federated learning;
DOI
10.1016/j.ins.2024.121849
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Privacy Preserving Federated Learning (PPFL) protects clients' local data privacy by uploading encrypted gradients to the server. However, in real-world scenarios, the heterogeneous distribution of client data makes it challenging to identify poisoning gradients. During local iterations, the local models continuously move in different directions, which causes the boundary between benign and malicious gradients to shift persistently. To address these challenges, we design a Dynamic Byzantine-robust Federated Learning (DBFL) defense strategy based on Two-trapdoor Homomorphic Encryption (THE), which enables the detection of encrypted poisoning attacks in heterogeneous data scenarios. Specifically, we introduce a secure Manhattan distance method that accurately measures the element-wise differences between two encrypted gradients, allowing precise detection of poisoning attacks in heterogeneous data scenarios while preserving privacy. Furthermore, we design a Byzantine-tolerant aggregation mechanism based on a dynamic threshold, which adapts to the continuously changing boundary between poisoning and benign gradients in heterogeneous data scenarios. This ensures that DBFL effectively excludes poisoning gradients even when 70% of the clients are malicious and controlled by Byzantine attackers. Security analysis demonstrates that DBFL achieves IND-CPA security. Extensive evaluations on two benchmark datasets (i.e., MNIST and CIFAR-10) show that DBFL outperforms existing defense strategies. In particular, DBFL achieves a 7%-40% accuracy improvement in the non-IID setting compared with existing solutions when defending against untargeted and targeted attacks.
Pages: 17
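
The abstract above outlines two mechanisms: a secure Manhattan distance computed over encrypted gradients and a Byzantine-tolerant aggregation rule with a dynamic threshold. The sketch below is a minimal plaintext analogue of that workflow, not the paper's THE-based protocol; the choice of reference gradient (coordinate-wise median), the median-plus-MAD threshold rule, and names such as robust_aggregate are illustrative assumptions.

# Illustrative sketch (not the paper's implementation): plaintext analogue of
# DBFL's two ideas -- Manhattan-distance scoring of client gradients and a
# dynamic threshold separating benign from poisoned updates. DBFL itself
# evaluates these quantities over two-trapdoor homomorphically encrypted
# gradients; the helper names and threshold rule here are assumptions.
import numpy as np

def manhattan_scores(gradients: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """L1 (Manhattan) distance of each client gradient to a reference gradient."""
    return np.abs(gradients - reference).sum(axis=1)

def dynamic_threshold(scores: np.ndarray) -> float:
    """A simple adaptive cut-off: median score plus one median absolute deviation.
    The paper's threshold adapts per round; this particular rule is an assumption."""
    med = np.median(scores)
    mad = np.median(np.abs(scores - med))
    return float(med + mad)

def robust_aggregate(gradients: np.ndarray) -> np.ndarray:
    """Keep gradients whose Manhattan score falls below the round's threshold,
    then average the survivors (Byzantine-tolerant aggregation sketch)."""
    reference = np.median(gradients, axis=0)   # coordinate-wise median as anchor
    scores = manhattan_scores(gradients, reference)
    tau = dynamic_threshold(scores)
    kept = gradients[scores <= tau]
    return kept.mean(axis=0) if len(kept) else reference

# Toy round: 7 benign clients plus 3 sign-flipping attackers on a 5-dim model.
rng = np.random.default_rng(0)
benign = rng.normal(0.5, 0.1, size=(7, 5))
poisoned = -5.0 * np.ones((3, 5))
update = robust_aggregate(np.vstack([benign, poisoned]))
print(update)

In DBFL itself, both the distance computation and the threshold comparison are carried out on two-trapdoor homomorphically encrypted gradients, so the server never observes plaintext client updates.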