SIREN+: Robust Federated Learning With Proactive Alarming and Differential Privacy

Cited by: 1
Authors
Guo, Hanxi [1 ]
Wang, Hao [2 ]
Song, Tao [1 ]
Hua, Yang [3 ]
Ma, Ruhui [1 ]
Jin, Xiulang [4 ]
Xue, Zhengui [5 ]
Guan, Haibing [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai Key Lab Scalable Comp & Syst, Shanghai 200240, Peoples R China
[2] Louisiana State Univ, Comp Sci & Engn, Baton Rouge, LA 70803 USA
[3] Queens Univ Belfast, EEECS ECIT, Belfast BT7 1NN, Northern Ireland
[4] Huawei Technol Co Ltd, Hangzhou 310000, Peoples R China
[5] Queens Univ Belfast, Sch Math & Phys, Belfast BT7 1NN, Northern Ireland
Funding
National Natural Science Foundation of China;
Keywords
Data models; Servers; Analytical models; Training; Computational modeling; Adaptation models; Federated learning; Byzantine-robust; attack-agnostic defense system; differential privacy;
DOI
10.1109/TDSC.2024.3362534
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL), an emerging machine learning paradigm that trains a global model across distributed clients without violating data privacy, has recently attracted significant attention. However, FL's distributed nature and iterative training greatly enlarge the attack surface for Byzantine and inference attacks. Existing FL defense methods can hardly protect FL from both Byzantine and inference attacks because the two defenses fundamentally conflict: the noise injected to defend against inference attacks perturbs model weights and training data, obscuring the model analysis that Byzantine-robust methods rely on to detect attacks. Moreover, the practicality of existing Byzantine-robust methods is limited precisely because they depend so heavily on model analysis. In this article, we present SIREN+, a new robust FL system that defends against a wide spectrum of Byzantine and inference attacks by jointly utilizing a proactive alarming mechanism and local differential privacy (LDP). The proactive alarming mechanism orchestrates the clients and the FL server to collaboratively detect attacks through distributed alarms, which are unaffected by the noise injected by LDP. Compared with state-of-the-art defense methods, SIREN+ can protect FL against Byzantine and inference attacks launched by a higher proportion of malicious clients while keeping the global model performing normally. Extensive experiments with diverse settings and attacks on real-world datasets show that SIREN+ outperforms existing defense methods under both Byzantine and inference attacks.
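The abstract notes that the noise LDP injects into client updates perturbs the model weights that analysis-based Byzantine defenses inspect. The sketch below illustrates the generic local-DP treatment of a client update (L2-norm clipping followed by Gaussian noise); the function name and parameters are illustrative assumptions, not SIREN+'s actual interface.

```python
import numpy as np

def ldp_perturb_update(update, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip a client's model update to an L2 bound, then add Gaussian
    noise scaled to that bound -- the standard local-DP recipe whose
    side effect (distorted weights) the abstract describes.
    Illustrative sketch only; not SIREN+'s API."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the clipping bound.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise standard deviation is proportional to the sensitivity bound.
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise
```

Because the server aggregates such noisy updates, any defense that scores clients by comparing raw update vectors sees the noise as well, which is why SIREN+ moves detection to a separate alarm channel that the LDP noise does not touch.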
Pages: 4843-4860 (18 pages)