Deriving an Optimal Noise Adding Mechanism for Privacy-Preserving Machine Learning

Cited: 13
Authors
Kumar, Mohit [1 ,2 ]
Rossbory, Michael [2 ]
Moser, Bernhard A. [2 ]
Freudenthaler, Bernhard [2 ]
Affiliations
[1] Univ Rostock, Fac Comp Sci & Elect Engn, Rostock, Germany
[2] Software Competence Ctr Hagenberg, Hagenberg, Austria
Source
DATABASE AND EXPERT SYSTEMS APPLICATIONS (DEXA 2019) | 2019 / Vol. 1062
Funding
European Union Horizon 2020
Keywords
Privacy; Noise adding mechanism; Machine learning;
DOI
10.1007/978-3-030-27684-3_15
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Differential privacy is a standard mathematical framework for quantifying the degree to which individual privacy in a statistical dataset is preserved. We derive an optimal (epsilon, delta)-differentially private noise adding mechanism for the real-valued data matrices used to train machine learning models. The aim is to protect a machine learning algorithm from an adversary who seeks to gain information about the training data from the algorithm's output; this protection is achieved by perturbing the values of the training data samples. The fundamental trade-off between privacy and utility is addressed by a novel three-step approach: (1) sufficient conditions on the probability density function of the noise for (epsilon, delta)-differential privacy of a machine learning algorithm are derived; (2) the noise distribution that, for a given level of entropy, minimizes the expected noise magnitude is derived; (3) using the entropy level as the design parameter, the optimal entropy level and the corresponding probability density function of the noise are derived.
Pages: 108-118
Number of pages: 11
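The abstract above outlines a three-step derivation of an optimal noise adding mechanism for real-valued data matrices. As a point of reference only, the minimal Python sketch below shows the kind of interface such a mechanism provides, using the standard Gaussian mechanism (calibration sigma = sqrt(2 ln(1.25/delta)) * Delta_2 / epsilon, valid for 0 < epsilon < 1) rather than the entropy-optimized noise density derived in the paper; the function name gaussian_mechanism, the sensitivity parameter, and the example matrix are illustrative assumptions, not taken from the paper.

import numpy as np

def gaussian_mechanism(data, l2_sensitivity, epsilon, delta, rng=None):
    """Perturb a real-valued data matrix with i.i.d. Gaussian noise.

    Baseline (epsilon, delta)-DP noise adding mechanism; NOT the optimal
    mechanism derived in the paper. Uses the classical calibration
    sigma = sqrt(2 * ln(1.25 / delta)) * l2_sensitivity / epsilon,
    which guarantees (epsilon, delta)-DP for 0 < epsilon < 1.
    """
    if not 0.0 < epsilon < 1.0:
        raise ValueError("this calibration assumes 0 < epsilon < 1")
    if not 0.0 < delta < 1.0:
        raise ValueError("delta must lie in (0, 1)")
    rng = rng if rng is not None else np.random.default_rng()
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return data + rng.normal(loc=0.0, scale=sigma, size=data.shape)

# Usage: privatize a small training matrix with entries in [0, 1],
# assuming (hypothetically) an L2 sensitivity of 1 for the release.
X = np.random.default_rng(0).uniform(size=(5, 3))
X_private = gaussian_mechanism(X, l2_sensitivity=1.0, epsilon=0.5, delta=1e-5)
print(np.round(X_private, 3))

A utility-oriented comparison along the lines the paper proposes would replace the Gaussian density here with the noise density obtained in step (3) and compare the expected noise magnitudes at the same (epsilon, delta).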