SF-CABD: Secure Byzantine fault tolerance federated learning on Non-IID data

Cited by: 3
Authors
Lin, Xiaoci [1,3]
Li, Yanbin [2,3]
Xie, Xiaojun [1]
Ding, Yu [1]
Wu, Xuehui [1]
Ge, Chunpeng [2]
Affiliations
[1] Nanjing Agr Univ, Coll Artificial Intelligence, Nanjing, Peoples R China
[2] Shandong Univ, Sch Software, Jinan, Peoples R China
[3] State Key Lab Cryptol, POB 5159, Beijing 100878, Peoples R China
Keywords
Federated learning; Byzantine robustness; Homomorphic encryption; Privacy attack
DOI
10.1016/j.knosys.2024.111851
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning enables collaborative learning among multiple parties while preserving client privacy. Its vulnerability to diverse Byzantine attacks stems from the opacity of the local training processes, and the issue becomes more pronounced when client data are not independent and identically distributed (Non-IID). Concurrently, gradient inversion attacks pose a growing risk to the confidentiality and integrity of federated learning systems. This work introduces a framework that tackles the difficulties of Non-IID data in federated learning. It first clusters the diverse data by Euclidean distance to alleviate intra-cluster heterogeneity. Within each cluster, a privacy-preserving Byzantine fault tolerance strategy based on cosine similarity is applied. For inter-cluster aggregation, we adopt normalization and introduce historical momentum. The integration of homomorphic encryption ensures that clients' gradient information participates in training under ciphertext, safeguarding against gradient inversion attacks. Finally, we conduct a comparative analysis against classical and state-of-the-art algorithms under various conditions; the results validate the effectiveness of our design in enhancing robustness on Non-IID data.
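Illustrative sketch
The record does not include the paper's algorithmic details, but the abstract outlines a three-stage aggregation pipeline: Euclidean-distance clustering, cosine-similarity-based Byzantine filtering within each cluster, and normalized inter-cluster aggregation with historical momentum. The Python sketch below only illustrates that pipeline under stated assumptions: plain k-means stands in for the clustering step, a simple keep-ratio cosine filter stands in for the intra-cluster defense, and the homomorphic-encryption layer applied to client gradients is omitted (updates are handled in plaintext). All names and parameters (cluster_by_euclidean, cosine_filter, aggregate, k, keep_ratio, beta) are hypothetical and not taken from the paper.

import numpy as np

def cluster_by_euclidean(updates, k, iters=10, seed=0):
    # Plain k-means on flattened client updates, using Euclidean distance
    # (assumption: the paper's clustering step is approximated by k-means).
    rng = np.random.default_rng(seed)
    centers = updates[rng.choice(len(updates), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(updates[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = updates[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels

def cosine_filter(cluster_updates, keep_ratio=0.8):
    # Keep the updates most aligned (by cosine similarity) with the cluster
    # mean and average them; a stand-in for the intra-cluster Byzantine defense.
    ref = cluster_updates.mean(axis=0)
    sims = cluster_updates @ ref / (
        np.linalg.norm(cluster_updates, axis=1) * np.linalg.norm(ref) + 1e-12)
    keep = np.argsort(sims)[-max(1, int(keep_ratio * len(sims))):]
    return cluster_updates[keep].mean(axis=0)

def aggregate(updates, momentum, k=3, beta=0.9):
    # Normalize each cluster's filtered update, average across clusters, and
    # smooth with historical momentum, as the abstract describes.
    labels = cluster_by_euclidean(updates, k)
    cluster_dirs = []
    for c in range(k):
        members = updates[labels == c]
        if len(members) == 0:
            continue
        agg = cosine_filter(members)
        cluster_dirs.append(agg / (np.linalg.norm(agg) + 1e-12))
    global_update = np.mean(cluster_dirs, axis=0)
    return beta * momentum + (1 - beta) * global_update

# Toy round: 10 clients, 5-dimensional flattened updates.
client_updates = np.random.randn(10, 5)
new_momentum = aggregate(client_updates, momentum=np.zeros(5))
print(new_momentum)

In the system described by the abstract, the similarity checks and aggregation would operate on homomorphically encrypted gradients rather than plaintext arrays; that ciphertext layer is the part this sketch deliberately leaves out.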
Pages: 14