Comparing Local and Central Differential Privacy Using Membership Inference Attacks

Cited by: 15
Authors
Bernau, Daniel [1 ]
Robl, Jonas [1 ]
Grassal, Philip W. [2 ]
Schneider, Steffen [3 ]
Kerschbaum, Florian [4 ]
Affiliations
[1] SAP, Karlsruhe, Germany
[2] Heidelberg Univ, Heidelberg, Germany
[3] ProcureAI, London, England
[4] Univ Waterloo, Waterloo, ON, Canada
Source
DATA AND APPLICATIONS SECURITY AND PRIVACY XXXV | 2021, Vol. 12840
Funding
EU Horizon 2020
Keywords
Anonymization; Membership inference; Neural networks;
DOI
10.1007/978-3-030-81242-3_2
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Attacks that aim to identify the training data of neural networks pose a severe threat to the privacy of individuals in the training dataset. One possible protection is anonymization of the training data or the training function with differential privacy. However, data scientists can choose between local and central differential privacy and must select meaningful privacy parameters epsilon. Furthermore, comparing local and central differential privacy by their privacy parameters can lead data scientists to incorrect conclusions, since the parameters reflect different types of mechanisms. Instead, we empirically compare the relative privacy-accuracy trade-off of one central and two local differential privacy mechanisms under a white-box membership inference attack. While membership inference only reflects a lower bound on inference risk and differential privacy formulates an upper bound, our experiments with several datasets show that the privacy-accuracy trade-off is similar for both types of mechanisms despite the large difference in their upper bounds. This suggests that the upper bound is far from the practical susceptibility to membership inference. Thus, small epsilon in central differential privacy and large epsilon in local differential privacy result in similar membership inference risks, and local differential privacy can be a meaningful alternative to central differential privacy for differentially private deep learning despite its comparatively higher privacy parameters.
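The abstract contrasts local and central differential privacy, whose epsilon parameters are not directly comparable because they bound different mechanisms. As an illustrative sketch only (these are textbook mechanisms, not the specific mechanisms evaluated in the paper, and all names and values below are hypothetical), the following shows a classic local mechanism, randomized response on a single bit, next to a simple central mechanism, Laplace noise added to an aggregate count:

```python
import math
import random


def randomized_response(bit: bool, epsilon: float) -> bool:
    """Local DP: each individual perturbs their own bit before sharing.

    The true bit is reported with probability e^eps / (e^eps + 1),
    which satisfies epsilon-local differential privacy.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else not bit


def laplace_count(bits: list, epsilon: float) -> float:
    """Central DP: a trusted curator sees the raw data and noises the result.

    A counting query has sensitivity 1, so Laplace noise with scale
    1/epsilon satisfies epsilon-differential privacy. The Laplace sample
    is drawn as the difference of two i.i.d. exponential variables.
    """
    true_count = sum(bits)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

The same epsilon means very different things here: in the local model every individual record is perturbed before leaving the client, whereas in the central model only the aggregate is noised, which is why the paper argues against comparing the two models by epsilon alone.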
Pages: 22-42 (21 pages)