Deriving an Optimal Noise Adding Mechanism for Privacy-Preserving Machine Learning

Cited by: 13
Authors
Kumar, Mohit [1 ,2 ]
Rossbory, Michael [2 ]
Moser, Bernhard A. [2 ]
Freudenthaler, Bernhard [2 ]
Affiliations
[1] Univ Rostock, Fac Comp Sci & Elect Engn, Rostock, Germany
[2] Software Competence Ctr Hagenberg, Hagenberg, Austria
Source
DATABASE AND EXPERT SYSTEMS APPLICATIONS (DEXA 2019) | 2019 / Vol. 1062
Funding
European Union's Horizon 2020;
Keywords
Privacy; Noise adding mechanism; Machine learning;
D O I
10.1007/978-3-030-27684-3_15
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Differential privacy is a standard mathematical framework for quantifying the degree to which the privacy of individuals in a statistical dataset is preserved. We derive an optimal (epsilon, delta)-differentially private noise-adding mechanism for real-valued data matrices used to train machine learning models. The aim is to protect a machine learning algorithm from an adversary who seeks to gain information about the training data from the algorithm's output by perturbing the value of a sample in the training data. The fundamental trade-off between privacy and utility is addressed by a novel three-step approach: (1) sufficient conditions on the probability density function of the noise for (epsilon, delta)-differential privacy of a machine learning algorithm are derived; (2) the noise distribution that, for a given level of entropy, minimizes the expected noise magnitude is derived; (3) using the entropy level as the design parameter, the optimal entropy level and the corresponding probability density function of the noise are derived.
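The paper's optimal noise distribution is developed through the three steps above; as an illustrative baseline only, the general pattern of an (epsilon, delta)-differentially private noise-adding mechanism can be sketched with the classical Gaussian mechanism. This is a minimal sketch under standard assumptions (classical sigma calibration, valid for epsilon < 1), not the optimal mechanism derived in the paper; the function name and parameters are chosen for illustration.

```python
import numpy as np

def gaussian_mechanism(data, sensitivity, epsilon, delta, seed=None):
    """Perturb a real-valued data matrix with Gaussian noise calibrated
    for (epsilon, delta)-differential privacy.

    NOTE: illustrative baseline, not the paper's derived optimal
    noise distribution. Uses the classical calibration
    sigma >= sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    which holds for epsilon < 1.
    """
    rng = np.random.default_rng(seed)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    # Add i.i.d. Gaussian noise to every entry of the data matrix
    return data + rng.normal(0.0, sigma, size=data.shape)

# Example: privatize a small 3x2 training-data matrix
X = np.array([[0.1, 0.9], [0.4, 0.6], [0.8, 0.2]])
X_priv = gaussian_mechanism(X, sensitivity=1.0, epsilon=0.5, delta=1e-5, seed=0)
print(X_priv.shape)  # noisy matrix with the same shape as X
```

The paper's contribution is precisely to replace such a fixed noise family with the density that minimizes expected noise magnitude at a given entropy level, so the Gaussian choice here serves only as a point of comparison.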
Pages: 108-118
Page count: 11