Split Aggregation: Lightweight Privacy-Preserving Federated Learning Resistant to Byzantine Attacks

Cited by: 1
Authors
Lu, Zhi [1]
Lu, SongFeng [1]
Cui, YongQuan [1]
Tang, XueMing [1]
Wu, JunJun [1]
Affiliation
[1] Huazhong Univ Sci & Technol, Hubei Engn Res Ctr Big Data Secur, Sch Cyber Sci & Engn, Hubei Key Lab Distributed Syst Secur, Wuhan 430074, Peoples R China
Keywords
Privacy; Servers; Robustness; Benchmark testing; Vectors; Data privacy; Homomorphic encryption; Poisoning attack; Federated learning; Defense; Privacy-preserving
DOI
10.1109/TIFS.2024.3402993
CLC Classification Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Federated Learning (FL), a distributed learning paradigm optimizing communication costs and enhancing privacy by uploading gradients instead of raw data, now confronts security challenges. It is particularly vulnerable to Byzantine poisoning attacks and potential privacy breaches via inference attacks. While homomorphic encryption and secure multi-party computation have been employed to design robust FL mechanisms, these predominantly rely on Euclidean distance or median-based metrics and often fall short in comprehensively defending against advanced poisoning attacks, such as adaptive attacks. Addressing this issue, our study introduces "Split-Aggregation", a lightweight privacy-preserving FL solution capable of withstanding adaptive attacks. This method maintains a computational complexity of O(dkN + k^3) and a communication overhead of O(dN), performing comparably to FedAvg when k = 10. Here, d represents the gradient dimension, N the number of users, and k the rank chosen during randomized singular value decomposition. Additionally, we utilize adaptive weight coefficients to mitigate gradient descent issues in honest users caused by non-independent and identically distributed (Non-IID) data. The proposed method's security and robustness are theoretically proven, with its complexity thoroughly analyzed. Experimental results demonstrate that at k = 10, this method surpasses the top-1 accuracy of current state-of-the-art robust privacy-preserving FL approaches. Moreover, opting for a smaller k significantly boosts efficiency with only marginal compromises in accuracy.
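For intuition about how the rank k enters the quoted cost, the following is a minimal sketch (not the paper's Split-Aggregation protocol) of a rank-k randomized SVD applied to a matrix of stacked client gradients; the function name, the use of NumPy, and the toy dimensions are illustrative assumptions.

import numpy as np

def randomized_svd(G, k, seed=0):
    # Rank-k randomized SVD of a d x N matrix G (one column per client gradient).
    # The two dense products below dominate at roughly O(dkN), the leading term
    # of the abstract's O(dkN + k^3); the small SVD only touches a k x N matrix.
    rng = np.random.default_rng(seed)
    d, N = G.shape
    omega = rng.standard_normal((N, k))    # random test matrix
    Q, _ = np.linalg.qr(G @ omega)         # orthonormal basis for the range of G, d x k
    B = Q.T @ G                            # project G down to k x N
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_small, s, Vt              # approximate rank-k factors of G

# Toy usage (hypothetical sizes): 1000-dimensional gradients from 20 users, k = 10.
G = np.random.default_rng(1).standard_normal((1000, 20))
U, s, Vt = randomized_svd(G, k=10)
print(U.shape, s.shape, Vt.shape)          # (1000, 10) (10,) (10, 20)

Shrinking k reduces the cost of every step above, which mirrors the abstract's observation that a smaller k boosts efficiency with only marginal compromises in accuracy.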
Pages: 5575-5590
Page count: 16
Related Papers
50 records in total
  • [1] Privacy-Preserving Federated Learning Resistant to Byzantine Attacks
    Mu X.-T.
    Cheng K.
    Song A.-X.
    Zhang T.
    Zhang Z.-W.
    Shen Y.-L.
    Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47(04): 842-861
  • [2] A Verifiable Privacy-Preserving Federated Learning Framework Against Collusion Attacks
    Chen, Yange
    He, Suyu
    Wang, Baocang
    Feng, Zhanshen
    Zhu, Guanghui
    Tian, Zhihong
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24(05): 3918-3934
  • [3] ShieldFL: Mitigating Model Poisoning Attacks in Privacy-Preserving Federated Learning
    Ma, Zhuoran
    Ma, Jianfeng
    Miao, Yinbin
    Li, Yingjiu
    Deng, Robert H.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17: 1639-1654
  • [4] Lightweight Byzantine-Robust and Privacy-Preserving Federated Learning
    Lu, Zhi
    Lu, Songfeng
    Cui, Yongquan
    Wu, Junjun
    Nie, Hewang
    Xiao, Jue
    Yi, Zepu
    EURO-PAR 2024: PARALLEL PROCESSING, PART II, EURO-PAR 2024, 2024, 14802: 274-287
  • [5] Privacy-preserving Byzantine-robust federated learning
    Ma, Xu
    Zhou, Yuqing
    Wang, Laihua
    Miao, Meixia
    COMPUTER STANDARDS & INTERFACES, 2022, 80
  • [6] Toward Secure Weighted Aggregation for Privacy-Preserving Federated Learning
    He, Yunlong
    Yu, Jia
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20: 3475-3488
  • [7] APFed: Anti-Poisoning Attacks in Privacy-Preserving Heterogeneous Federated Learning
    Chen, Xiao
    Yu, Haining
    Jia, Xiaohua
    Yu, Xiangzhan
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18: 5749-5761
  • [8] A Lightweight and Accuracy-Lossless Privacy-Preserving Method in Federated Learning
    Liu, Zhen
    Yang, Changsong
    Ding, Yong
    Liang, Hai
    Wang, Yujue
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12(03): 3118-3129
  • [9] Lightweight and Dynamic Privacy-Preserving Federated Learning via Functional Encryption
    Yu, Boan
    Zhao, Jun
    Zhang, Kai
    Gong, Junqing
    Qian, Haifeng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20: 2496-2508
  • [10] Verifiable Federated Learning With Privacy-Preserving Data Aggregation for Consumer Electronics
    Xie, Haoran
    Wang, Yujue
    Ding, Yong
    Yang, Changsong
    Zheng, Haibin
    Qin, Bo
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70(01): 2696-2707