Concentrated Differentially Private Federated Learning With Performance Analysis

Cited by: 22
Authors
Hu, Rui [1]
Guo, Yuanxiong [2]
Gong, Yanmin [1]
Affiliations
[1] Univ Texas San Antonio, Dept Elect & Comp Engn, San Antonio, TX 78249 USA
[2] Univ Texas San Antonio, Dept Informat Syst & Cyber Secur, San Antonio, TX 78249 USA
Source
IEEE OPEN JOURNAL OF THE COMPUTER SOCIETY | 2021 / Volume 2
Funding
U.S. National Science Foundation;
Keywords
Collaborative work; Servers; Privacy; Data models; Computational modeling; Training; Convergence; Federated learning; security and privacy; convergence analysis; zero-concentrated differential privacy; attacks
DOI
10.1109/OJCS.2021.3099108
Chinese Library Classification
TP3 [computing technology; computer technology]
Discipline Classification Code
0812
Abstract
Federated learning engages a set of edge devices to collaboratively train a common model without sharing their local data, and thus offers better user privacy than traditional cloud-based learning approaches. However, recent model inversion and membership inference attacks have demonstrated that the model updates shared during the interactive training process can still leak sensitive user information. It is therefore desirable to provide a rigorous differential privacy (DP) guarantee in federated learning. The main challenge is to maintain high model utility despite the randomness repeatedly introduced by DP mechanisms, especially when the server is not fully trusted. In this paper, we investigate how to provide DP for the most widely adopted federated learning scheme, federated averaging. Our approach combines local gradient perturbation, secure aggregation, and zero-concentrated differential privacy (zCDP) to achieve better utility and privacy protection without a trusted server. We jointly account for the randomness introduced by the DP mechanism, client sampling, and data subsampling, and theoretically analyze the convergence rate and the end-to-end DP guarantee for non-convex loss functions. Extensive numerical experiments on a real-world dataset demonstrate that our method achieves a good utility-privacy trade-off.
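The local-gradient-perturbation pattern described in the abstract can be sketched as follows. This is an illustrative sketch of the generic Gaussian-mechanism approach to DP federated averaging, not the authors' exact algorithm: the `clip_norm` and `noise_std` values are hypothetical, and secure aggregation, client sampling, and data subsampling are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_and_perturb(grad, clip_norm, noise_std, rng):
    """Clip a gradient to a bounded L2 norm, then add Gaussian noise.

    Clipping bounds each client's sensitivity; the Gaussian mechanism
    with this sensitivity satisfies rho-zCDP per release with
    rho = (clip_norm / noise_std)**2 / 2 (Bun & Steinke, 2016).
    """
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)
    return clipped + rng.normal(0.0, noise_std, size=grad.shape)

def dp_fedavg_round(client_grads, clip_norm=1.0, noise_std=0.5, rng=rng):
    """One round: each client perturbs its gradient locally, the server
    averages. With secure aggregation the server would see only the sum,
    but the resulting update is numerically the same."""
    noisy = [clip_and_perturb(g, clip_norm, noise_std, rng)
             for g in client_grads]
    return np.mean(noisy, axis=0)

# Toy usage: 4 clients, 3-dimensional gradients.
grads = [rng.normal(size=3) for _ in range(4)]
update = dp_fedavg_round(grads)
```

Because the noise is added on each client before aggregation, no single party (including the server) ever observes an unperturbed individual gradient, which is what allows DP without a fully trusted server.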
Pages: 276-289 (14 pages)