Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy

Cited by: 9
Authors
Zhang, Zikai [1 ]
Hu, Rui [1 ]
Affiliations
[1] Univ Nevada, Dept Comp Sci & Engn, Reno, NV 89557 USA
Source
2023 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY, CNS | 2023
Keywords
Federated Learning; Byzantine Attack; Differential Privacy; Model Sparsification; Variance Reduction;
DOI
10.1109/CNS59707.2023.10288938
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Federated learning (FL) is designed to preserve data privacy during model training: the data remains on the client side (e.g., IoT devices), and only clients' model updates are shared iteratively for collaborative learning. However, this process is vulnerable to both privacy attacks and Byzantine attacks: the local model updates shared throughout the FL network can leak private information about the local training data, and they can also be maliciously crafted by Byzantine attackers to disrupt the learning process. In this paper, we propose a new FL scheme that guarantees rigorous privacy while simultaneously enhancing system robustness against Byzantine attacks. Our approach introduces sparsification- and momentum-driven variance reduction into the client-level differential privacy (DP) mechanism to defend against Byzantine attackers. This security design does not weaken the privacy guarantee of the client-level DP mechanism; hence, our approach achieves the same client-level DP guarantee as the state of the art. We conduct extensive experiments on both IID and non-IID datasets across different tasks, and we evaluate our approach under various Byzantine attacks against state-of-the-art defense methods. The results show the efficacy of our framework and demonstrate its ability to improve system robustness against Byzantine attacks while achieving a strong privacy guarantee.
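Since this record only summarizes the method at a high level, the Python sketch below illustrates how the three ingredients named in the abstract (momentum-driven variance reduction, sparsification, and client-level DP via clipping and Gaussian noise) might fit together in a single FL round. It is a minimal sketch under our own assumptions: the function names, hyperparameters, and the plain-mean aggregation are ours, not the authors' implementation.

# Illustrative sketch only; not the authors' algorithm. All names and
# hyperparameters (beta, k, clip_norm, noise_multiplier) are assumptions.
import numpy as np

def top_k_sparsify(update: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude coordinates; zero out the rest."""
    if k >= update.size:
        return update
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale the update so its L2 norm is at most clip_norm (DP sensitivity bound)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def client_step(grad, momentum, beta=0.9, k=100, clip_norm=1.0):
    """One client's local processing: momentum -> sparsify -> clip."""
    momentum = beta * momentum + (1.0 - beta) * grad  # variance reduction
    sparse = top_k_sparsify(momentum, k)
    return clip_update(sparse, clip_norm), momentum

def server_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Average clipped updates and add Gaussian noise calibrated to
    clip_norm, giving a client-level DP guarantee on the released update."""
    rng = rng or np.random.default_rng(0)
    n = len(updates)
    mean = np.mean(updates, axis=0)
    sigma = noise_multiplier * clip_norm / n
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# Toy usage: 5 honest clients on a 1000-dimensional model.
dim, n_clients = 1000, 5
rng = np.random.default_rng(42)
momenta = [np.zeros(dim) for _ in range(n_clients)]
updates = []
for i in range(n_clients):
    grad = rng.normal(size=dim)  # stand-in for a locally computed gradient
    upd, momenta[i] = client_step(grad, momenta[i])
    updates.append(upd)
global_update = server_aggregate(updates)
print(global_update.shape)

The intuition the sketch encodes: momentum and sparsification shrink the variance and dimensionality of honest updates, which tightens the range that Byzantine updates can exploit, while clipping and noising are applied afterward so the client-level DP guarantee is unaffected.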
Pages: 9