DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning

Cited by: 4
Authors
Jarin, Ismat [1 ]
Eshete, Birhanu [1 ]
Affiliations
[1] Univ Michigan, Dearborn, MI 48128 USA
Source
CODASPY'22: PROCEEDINGS OF THE TWELFTH ACM CONFERENCE ON DATA AND APPLICATION SECURITY AND PRIVACY | 2022
Keywords
Differential Privacy; Machine Learning; Membership Inference Attack;
DOI
10.1145/3508398.3511513
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Differential Privacy (DP) has emerged as a rigorous formalism to quantify the privacy protection provided by an algorithm that operates on privacy-sensitive data. In machine learning (ML), DP has been employed to limit inference/disclosure of training examples. Prior work leveraged DP across the ML pipeline, albeit in isolation, often focusing on mechanisms such as gradient perturbation. In this paper, we present DP-UTIL, a holistic utility analysis framework of DP across the ML pipeline with a focus on input perturbation, objective perturbation, gradient perturbation, output perturbation, and prediction perturbation. Given an ML task on privacy-sensitive data, DP-UTIL enables an ML privacy practitioner to perform a holistic comparative analysis of the impact of DP at these five perturbation spots, measured in terms of model utility loss, privacy leakage, and the number of truly revealed training samples. We evaluate DP-UTIL over classification tasks on vision, medical, and financial datasets, using two representative learning algorithms (logistic regression and deep neural networks) against the membership inference attack as a case study attack. One of the highlights of our results is that prediction perturbation consistently achieves the lowest utility loss on all models across all datasets. In logistic regression models, objective perturbation results in the lowest privacy leakage compared to the other perturbation techniques; for deep neural networks, gradient perturbation results in the lowest privacy leakage. Moreover, our results on truly revealed records suggest that as privacy leakage increases, a differentially private model reveals a greater number of member samples. Overall, our findings suggest that to make an informed decision on the choice of perturbation mechanism, an ML privacy practitioner needs to examine the dynamics among optimization techniques (convex vs. non-convex), the number of classes, and the privacy budget.
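To make the "perturbation spot" idea concrete, the following is a minimal, illustrative sketch of gradient perturbation in the style of DP-SGD (Abadi et al., reference [1] below): each per-example gradient is clipped to a fixed L2 norm and Gaussian noise is added to the summed gradient before the update. The function name `dp_sgd_step`, the toy data, and the parameter values are assumptions for illustration, not artifacts of the DP-UTIL paper; a real deployment would additionally use a privacy accountant to track the cumulative (epsilon, delta) budget.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One gradient-perturbation step for logistic regression:
    clip each per-example gradient to L2 norm <= `clip`, then add
    Gaussian noise with std `noise_mult * clip` to the summed gradient."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    # Per-example gradients of the logistic loss: (sigmoid(Xw) - y) * x.
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    grads = (preds - y)[:, None] * X                      # shape (n, d)
    # Clip each example's gradient to bound its sensitivity.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Sum, perturb with calibrated Gaussian noise, then average and step.
    noisy_sum = grads.sum(axis=0) + rng.normal(0.0, noise_mult * clip,
                                               size=w.shape)
    return w - lr * noisy_sum / n

# Toy linearly separable data: the label depends only on feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
acc = np.mean(((X @ w) > 0) == (y > 0.5))
```

The other four spots differ only in where the noise enters: input perturbation noises `X` itself, objective perturbation adds a random linear term to the loss, output perturbation noises the final `w`, and prediction perturbation noises the model's outputs at inference time.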
Pages: 41-52
Page count: 12
Related Papers
35 entries in total
  • [1] Deep Learning with Differential Privacy
    Abadi, Martin
    Chu, Andy
    Goodfellow, Ian
    McMahan, H. Brendan
    Mironov, Ilya
    Talwar, Kunal
    Zhang, Li
    [J]. CCS'16: PROCEEDINGS OF THE 2016 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2016, : 308 - 318
  • [2] Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds
    Bun, Mark
    Steinke, Thomas
    [J]. THEORY OF CRYPTOGRAPHY, TCC 2016-B, PT I, 2016, 9985 : 635 - 658
  • [3] Carlini N, 2019, PROCEEDINGS OF THE 28TH USENIX SECURITY SYMPOSIUM, P267
  • [4] Chaudhuri K., 2008, Adv. Neural Inf. Process. Syst., V21, P289
  • [5] Chaudhuri K, 2011, J MACH LEARN RES, V12, P1069
  • [6] Local Privacy and Statistical Minimax Rates
    Duchi, John C.
    Jordan, Michael I.
    Wainwright, Martin J.
    [J]. 2013 IEEE 54TH ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS), 2013, : 429 - 438
  • [7] Calibrating noise to sensitivity in private data analysis
    Dwork, Cynthia
    McSherry, Frank
    Nissim, Kobbi
    Smith, Adam
    [J]. THEORY OF CRYPTOGRAPHY, PROCEEDINGS, 2006, 3876 : 265 - 284
  • [8] The Algorithmic Foundations of Differential Privacy
    Dwork, Cynthia
    Roth, Aaron
    [J]. FOUNDATIONS AND TRENDS IN THEORETICAL COMPUTER SCIENCE, 2013, 9 (3-4): : 211 - 406
  • [9] Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
    Fredrikson, Matt
    Jha, Somesh
    Ristenpart, Thomas
    [J]. CCS'15: PROCEEDINGS OF THE 22ND ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2015, : 1322 - 1333
  • [10] Differentially Private Empirical Risk Minimization with Input Perturbation
    Fukuchi, Kazuto
    Quang Khai Tran
    Sakuma, Jun
    [J]. DISCOVERY SCIENCE, DS 2017, 2017, 10558 : 82 - 90