Differentially Private Federated Learning With an Adaptive Noise Mechanism

Cited by: 7
Authors
Xue, Rui [1]
Xue, Kaiping [1,2,3]
Zhu, Bin [1]
Luo, Xinyi [1]
Zhang, Tianwei [4]
Sun, Qibin [1]
Lu, Jun [1,2,3]
Affiliations
[1] Univ Sci & Technol China, Sch Cyber Sci & Technol, Hefei 230027, Anhui, Peoples R China
[2] Key Lab Med Elect & Digital Hlth Zhejiang Prov, Jiaxing 314001, Peoples R China
[3] Engn Res Ctr Intelligent Human Hlth Situat Awarene, Jiaxing 314001, Zhejiang, Peoples R China
[4] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; differential privacy; adaptive noise;
DOI
10.1109/TIFS.2023.3318944
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Federated Learning (FL) enables multiple distributed clients to collaboratively train a model on their own datasets. To avoid the potential privacy threats in FL, researchers have proposed the DP-FL strategy, which uses differential privacy (DP) to add carefully calibrated noise to the exchanged parameters so as to hide private information. DP-FL guarantees the privacy of FL at the cost of degraded model performance. To balance the trade-off between model accuracy and privacy, we propose a differentially private federated learning scheme with an adaptive noise mechanism. This is challenging because the distributed nature of FL makes it difficult to appropriately estimate sensitivity, the DP concept that determines the scale of the noise. To resolve this, we design a generic method for sensitivity estimation based on local and global historical information. We also provide instantiations on four commonly used optimizers to verify its effectiveness. Experiments on MNIST, FMNIST, and CIFAR-10 show that our proposed scheme achieves higher accuracy than prior works while maintaining a high level of privacy protection.
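The abstract describes the core DP-FL step: each client clips its model update to a sensitivity bound and adds Gaussian noise scaled by that bound, with the bound estimated adaptively from historical update norms. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm; the blending rule `estimate_sensitivity` and its `alpha` parameter are hypothetical placeholders for the paper's local/global history-based estimator.

```python
import numpy as np

def estimate_sensitivity(history, current_norm, alpha=0.5):
    """Hypothetical adaptive estimate: blend the mean of past update
    norms with the current update norm."""
    hist_mean = float(np.mean(history)) if history else current_norm
    return alpha * hist_mean + (1.0 - alpha) * current_norm

def dp_noisy_update(update, history, sigma_multiplier=1.0, rng=None):
    """Clip a client's update to the adaptive sensitivity estimate,
    add Gaussian noise proportional to it, and record the raw norm."""
    rng = np.random.default_rng() if rng is None else rng
    norm = float(np.linalg.norm(update))
    s = estimate_sensitivity(history, norm)
    # Scale down the update so its L2 norm is at most s.
    clipped = update * min(1.0, s / max(norm, 1e-12))
    noisy = clipped + rng.normal(0.0, sigma_multiplier * s, size=update.shape)
    history.append(norm)
    return noisy
```

With a fixed clipping bound (standard DP-SGD), a bound that is too large wastes privacy budget on noise; the adaptive estimate aims to track the true update magnitude as training progresses.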
Pages: 74-87
Page count: 14