Does Differential Privacy Really Protect Federated Learning From Gradient Leakage Attacks?

Cited by: 9
Authors
Hu, Jiahui [1 ,2 ]
Du, Jiacheng [1 ,2 ]
Wang, Zhibo [1 ,2 ]
Pang, Xiaoyi [1 ,2 ]
Zhou, Yajie [1 ,2 ]
Sun, Peng [3 ]
Ren, Kui [1 ,2 ]
Affiliations
[1] Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Sch Cyber Sci & Technol, Hangzhou 310027, Peoples R China
[3] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Servers; Privacy; Differential privacy; Data models; Training data; Training; TV; federated learning; gradient leakage attack;
DOI
10.1109/TMC.2024.3417930
CLC classification number
TP [automation technology, computer technology];
Discipline classification code
0812;
Abstract
Federated Learning (FL) is susceptible to the gradient leakage attack (GLA), which can recover local private training data from the shared gradients or model updates. To ensure privacy, differential privacy (DP) is applied in FL by clipping and adding noise to local gradients (i.e., Local Differential Privacy (LDP)) or to the global model update (i.e., Central Differential Privacy (CDP)). However, the effectiveness of DP in defending against GLAs needs to be thoroughly investigated, since some works briefly verify that DP can guard FL against GLAs while others question its defense capability. In this paper, we empirically evaluate CDP and LDP with respect to their resistance to GLAs, and pay close attention to the trade-off between privacy and utility in FL. Our findings reveal that: 1) existing GLAs can be defended against by CDP with a per-layer clipping strategy and by LDP with a reasonable privacy guarantee, and 2) both CDP and LDP achieve the trade-off between privacy and utility when training shallow models, but cannot guarantee this trade-off when training deeper models (e.g., ResNets). Motivated by the crucial role of the clipping operation in DP, we propose an improved attack that incorporates the clipping operation into existing GLAs without requiring additional information. The experimental results show that our attack can break the protection of CDP and weaken the effectiveness of LDP. Overall, our work validates the effectiveness of DP under GLAs while also revealing its vulnerability. We hope this work can provide guidance on utilizing DP to defend against GLAs in FL and inspire the design of future privacy-preserving FL.
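As context for the DP mechanisms named in the abstract, the following is a minimal, illustrative sketch (not the authors' implementation; function names and hyperparameters are assumptions) of the standard clip-then-add-noise step: flat L2 clipping with Gaussian noise for an LDP-style local update, plus the per-layer clipping variant mentioned for CDP.

```python
# Illustrative sketch only: LDP-style clipping + Gaussian noising of a client's
# flattened model update, and a per-layer clipping helper. Hyperparameters and
# function names are assumptions, not taken from the paper.
import numpy as np

def clip_and_noise_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a flat 1-D update to L2 norm `clip_norm`, then add Gaussian noise
    with standard deviation noise_multiplier * clip_norm (DP-SGD convention)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def per_layer_clip(layers, clip_norm=1.0):
    """Per-layer clipping: each layer's tensor is clipped independently,
    the clipping strategy the abstract highlights for CDP."""
    return [w * min(1.0, clip_norm / (np.linalg.norm(w) + 1e-12)) for w in layers]
```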
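The improved attack is described only at a high level in the abstract. The sketch below is purely hypothetical: it shows one generic way a clipping step could be folded into a DLG-style gradient-matching attack, with all names, signatures, and hyperparameters assumed rather than taken from the paper.

```python
# Hypothetical sketch: optimize dummy data so that its *clipped* gradients match
# the observed (clipped) victim gradients. Not the authors' algorithm.
import torch

def clip_like_defense(grads, clip_norm=1.0):
    """Apply the same flat L2 clipping the defense would apply to the gradients."""
    total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)
    return [g * scale for g in grads]

def clipping_aware_attack(model, loss_fn, observed_grads, x_shape, y,
                          clip_norm=1.0, steps=200, lr=0.1):
    dummy_x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy_x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dummy_grads = torch.autograd.grad(
            loss_fn(model(dummy_x), y), model.parameters(), create_graph=True)
        # Key difference from vanilla gradient matching: clip before matching.
        dummy_grads = clip_like_defense(dummy_grads, clip_norm)
        match_loss = sum(((dg - og) ** 2).sum()
                         for dg, og in zip(dummy_grads, observed_grads))
        match_loss.backward()
        opt.step()
    return dummy_x.detach()
```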
Pages: 12635-12649
Number of pages: 15