Privacy-Enhanced Federated Learning Against Poisoning Adversaries

Cited by: 179
Authors
Liu, Xiaoyuan [1]
Li, Hongwei [1,2]
Xu, Guowen [2,3]
Chen, Zongqi [1]
Huang, Xiaoming [4]
Lu, Rongxing [5]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[2] Chinese Acad Sci, Chongqing Inst Green & Intelligent Technol, Chongqing Key Lab Big Data & Intelligent Comp, Chongqing, Peoples R China
[3] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[4] CETC Cyberspace Secur Res Inst Co Ltd, Chengdu 610041, Peoples R China
[5] Univ New Brunswick UNB, Fac Comp Sci FCS, Fredericton, NB E3B 5A3, Canada
Funding
National Natural Science Foundation of China
Keywords
Training; Data privacy; Privacy; Servers; Security; Computational modeling; Data models; Federated learning; poisoning attack; privacy protection; cloud computing; SECURE
DOI
10.1109/TIFS.2021.3108434
CLC number
TP301 [Theory, Methods]
Subject classification code
081202
Abstract
Federated learning (FL), as a distributed machine learning setting, has received considerable attention in recent years. To alleviate privacy concerns, FL essentially promises that multiple parties jointly train a model by exchanging gradients rather than raw data. However, intrinsic privacy issues still exist in FL; for example, a user's training samples can be reconstructed from the shared gradients alone. Moreover, the emerging poisoning attack poses a crucial security threat to FL. In particular, owing to the distributed nature of FL, malicious users may submit crafted gradients during training to undermine the integrity and availability of the model. Furthermore, there is a tension in addressing the two issues simultaneously: privacy-preserving FL solutions are dedicated to ensuring gradient indistinguishability, whereas defenses against poisoning attacks tend to remove outliers based on their similarity. To resolve this dilemma, this paper builds a bridge between the two issues. Specifically, we present a privacy-enhanced FL (PEFL) framework that adopts homomorphic encryption as the underlying technology and provides the server with a channel to punish poisoners via effective gradient data extraction based on the logarithmic function. To the best of our knowledge, PEFL is the first effort to efficiently detect poisoning behaviors in FL under ciphertext. Detailed theoretical analyses establish the security and convergence properties of the scheme. Moreover, experiments on real-world datasets show that PEFL effectively defends against label-flipping and backdoor attacks, two representative poisoning attacks in FL.
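To make the dilemma concrete, the following Python sketch (not the authors' implementation) illustrates the two ingredients the abstract combines: additively homomorphic aggregation of gradients and a similarity-based weighting that penalizes outlier updates. It assumes the python-paillier library (phe); the Pearson-correlation weighting against the coordinate-wise median is a generic stand-in for the paper's ciphertext-domain similarity check and is computed here on plaintexts purely for readability, whereas PEFL performs the detection under encryption.

    import numpy as np
    from phe import paillier  # pip install phe

    public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

    def encrypt_gradient(grad):
        # Client side: encrypt each coordinate under the Paillier public key.
        return [public_key.encrypt(float(g)) for g in grad]

    def outlier_weights(plain_grads):
        # Illustrative stand-in for the paper's encrypted-domain check:
        # score each update by its Pearson correlation with the
        # coordinate-wise median and zero out anti-correlated updates.
        median = np.median(plain_grads, axis=0)
        corrs = [np.corrcoef(g, median)[0, 1] for g in plain_grads]
        return [max(c, 0.0) for c in corrs]

    def aggregate(encrypted_grads, weights):
        # Server side: weighted average computed directly on ciphertexts,
        # since Paillier supports addition of two ciphertexts and
        # multiplication of a ciphertext by a plaintext scalar.
        total = max(sum(weights), 1e-12)
        dim = len(encrypted_grads[0])
        agg = []
        for j in range(dim):
            acc = encrypted_grads[0][j] * (weights[0] / total)
            for i in range(1, len(encrypted_grads)):
                acc = acc + encrypted_grads[i][j] * (weights[i] / total)
            agg.append(acc)
        return agg

    # Toy round: two honest clients and one label-flipping poisoner whose
    # crafted gradient points the opposite way.
    grads = [np.array([0.9, -0.5, 0.2]),
             np.array([1.0, -0.4, 0.3]),
             np.array([-1.0, 0.5, -0.3])]     # poisoned update
    weights = outlier_weights(grads)           # plaintext here for clarity
    enc = [encrypt_gradient(g) for g in grads]
    agg = aggregate(enc, weights)
    print([round(private_key.decrypt(c), 3) for c in agg])

In this toy round the poisoned update receives weight zero, so the decrypted aggregate stays close to the honest clients' mean; the point of PEFL is to obtain such weights without ever exposing individual gradients in the clear.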
Pages: 4574-4588
Number of pages: 15