A Survey on Differentially Private Machine Learning [Review Article]

Cited by: 80
Authors
Gong, Maoguo [1 ]
Xie, Yu [1 ]
Pan, Ke [1 ]
Feng, Kaiyuan [1 ]
Qin, A. K. [2 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Xian, Peoples R China
[2] Swinburne Univ Technol, Dept Comp Sci & Software Engn, Melbourne, Vic, Australia
Keywords
CLASSIFICATION;
DOI: 10.1109/MCI.2020.2976185
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Recent years have witnessed remarkable successes of machine learning in various applications. However, machine learning models risk leaking private information contained in their training data, a problem that has attracted increasing research attention. As one of the mainstream privacy-preserving techniques, differential privacy provides a promising way to prevent the leakage of individual-level information in training data while preserving the data's utility for model building. This work provides a comprehensive survey of existing work that incorporates differential privacy into machine learning, so-called differentially private machine learning, and categorizes it into two broad categories according to the differential privacy mechanism used: the Laplace/Gaussian/exponential mechanisms, in which a calibrated amount of noise is added to the non-private model, and the output/objective perturbation mechanisms, in which the output or the objective function is perturbed by random noise. In particular, the survey covers techniques for differentially private deep learning to address recent concerns about the privacy of big-data contributors. In addition, research challenges in terms of model utility, privacy level, and applications are discussed, and several potential future research directions for differentially private machine learning are pointed out to tackle these challenges. © 2020 IEEE.
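The "calibrated amount of noise" the abstract refers to can be illustrated with the classical Laplace mechanism (reference [34] below): noise drawn from a Laplace distribution with scale Δf/ε is added to a query answer, where Δf is the query's L1 sensitivity. This is a minimal sketch, not code from the surveyed paper; the function name and parameters are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    achieving epsilon-differential privacy for a query whose L1
    sensitivity (max change from one individual's data) is `sensitivity`."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon  # smaller epsilon => more noise, more privacy
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a counting query (sensitivity 1) at epsilon = 0.5
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

Counting queries have sensitivity 1 because adding or removing one person changes the count by at most 1; the same pattern extends to sums and means once their sensitivity is bounded.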
Pages: 49-88
Page count: 17
Related papers (128 records)
[31] Dan F., 2009, Proc. 41st Annual ACM Symposium on Theory of Computing, p. 361.
[32] Das, Swagatam; Datta, Shounak; Chaudhuri, Bidyut B. Handling data irregularities in classification: Foundations, trends, and future challenges. Pattern Recognition, 2018, 81:674-693.
[33] Dwork C., 2006, Lecture Notes in Computer Science, vol. 4004, p. 486.
[34] Dwork, Cynthia; McSherry, Frank; Nissim, Kobbi; Smith, Adam. Calibrating noise to sensitivity in private data analysis. Theory of Cryptography, Proceedings, 2006, 3876:265-284.
[35] Dwork, Cynthia; Roth, Aaron. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science, 2013, 9(3-4):211-406.
[36] Dwork, Cynthia; Rothblum, Guy N.; Vadhan, Salil. Boosting and Differential Privacy. 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, 2010, pp. 51-60.
[37] Erlingsson, 2018, Proc. International Conference on Learning Representations.
[38] Fletcher, Sam; Islam, Md Zahidul. Differentially private random decision forests using smooth sensitivity. Expert Systems with Applications, 2017, 78:16-31.
[39] Fredrikson M., 2015, Proc. 22nd ACM SIGSAC Conference on Computer and Communications Security, p. 1322. DOI: 10.1145/2810103.2813677.
[40] Fredrikson M., 2014, Proceedings of the 23rd USENIX Security Symposium, p. 17.