Privacy-preserving Byzantine-robust federated learning

Times Cited: 42
Authors
Ma, Xu [1 ,3 ]
Zhou, Yuqing [1 ]
Wang, Laihua [1 ]
Miao, Meixia [2 ,3 ]
Affiliations
[1] Qufu Normal Univ, Sch Cyber Sci & Engn, Qufu 273165, Shandong, Peoples R China
[2] Xian Univ Posts & Telecommun, Sch Cyberspace Secur, Xian 710061, Peoples R China
[3] Xidian Univ, State Key Lab Integrated Serv Networks ISN, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Privacy; Homomorphic encryption; Zero-knowledge proof; SIGNATURES;
DOI
10.1016/j.csi.2021.103561
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
The robustness of federated learning has become a major concern, since Byzantine adversaries, who may upload false data owing to unreliable communication channels, corrupted hardware, or even malicious attacks, can be concealed among the distributed workers. Meanwhile, it has been shown that membership inference attacks and model inversion attacks against federated learning can leak private information about the training data. To address these challenges, we propose a privacy-preserving Byzantine-robust federated learning scheme (PBFL) that accounts for both the robustness of federated learning and the privacy of the workers. PBFL builds on an existing Byzantine-robust federated learning algorithm and combines it with distributed Paillier encryption and zero-knowledge proofs to guarantee privacy and filter out anomalous parameters uploaded by Byzantine adversaries. Finally, we show that our scheme provides a higher level of privacy protection than previous Byzantine-robust federated learning algorithms.
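The privacy mechanism the abstract describes, aggregating encrypted updates so the server never sees individual gradients, rests on the additive homomorphism of Paillier encryption. The following minimal sketch illustrates that property only; it is not the paper's distributed-key variant, and the toy primes and per-worker gradient values are assumptions made for demonstration:

```python
# Illustrative single-key Paillier: workers encrypt gradients, the server
# multiplies ciphertexts, and decryption yields the SUM of the plaintexts.
import math
import random

def keygen(p=293, q=433):
    # Toy primes for demonstration; real deployments need a ~2048-bit n.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # With g = n + 1, the decryption constant mu reduces to lam^{-1} mod n.
    mu = pow(lam, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # c = g^m * r^n mod n^2
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    # L(x) = (x - 1) / n, then multiply by mu modulo n.
    return ((x - 1) // n) * mu % n

pk, sk = keygen()
grads = [7, 11, 5]  # hypothetical (quantized) gradients from three workers
agg = 1
for gval in grads:
    # Multiplying ciphertexts adds the underlying plaintexts.
    agg = (agg * encrypt(pk, gval)) % (pk[0] ** 2)
print(decrypt(pk, sk, agg))  # → 23 (= 7 + 11 + 5)
```

In the paper's setting the decryption key is additionally shared among parties (distributed Paillier), so no single server can decrypt an individual worker's update; the sketch above keeps one key pair purely to show the homomorphic aggregation step.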
Pages: 12