GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models

Cited by: 163
Authors
Chen, Dingfan [1 ]
Yu, Ning [2 ,3 ]
Zhang, Yang [1 ]
Fritz, Mario [1 ]
Affiliations
[1] CISPA Helmholtz Ctr Informat Secur, Saarbrucken, Germany
[2] Univ Maryland, College Pk, MD 20742 USA
[3] Max Planck Inst Informat, Saarbrucken, Germany
Source
CCS '20: PROCEEDINGS OF THE 2020 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY | 2020
Keywords
Membership inference attacks; deep learning; generative models; privacy-preserving machine learning
DOI
10.1145/3372297.3417238
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
Deep learning has achieved overwhelming success, spanning from discriminative models to generative models. In particular, deep generative models have enabled a new level of performance in a myriad of areas, ranging from media manipulation to sanitized dataset generation. Despite this success, the potential privacy risks posed by generative models have not been analyzed systematically. In this paper, we focus on membership inference attacks against deep generative models, which reveal information about the training data of the victim models. Specifically, we present the first taxonomy of membership inference attacks, encompassing not only existing attacks but also our novel ones. In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models. Moreover, we provide a theoretically grounded attack calibration technique that consistently boosts attack performance across different attack settings, data modalities, and training configurations. We complement the systematic analysis of attack performance with a comprehensive experimental study that investigates the effectiveness of various attacks with respect to model type and training configuration over three diverse application scenarios (i.e., images, medical data, and location data).
Pages: 343-362
Page count: 20
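The abstract describes a generic, reconstruction-based membership inference attack plus a calibration technique that corrects for how hard a sample is to reconstruct in general. The following is a minimal, hypothetical Python sketch of that idea, not the authors' implementation: `reconstruction_error` crudely approximates the best reconstruction of a query sample by sampling the generator's latent space (rather than optimizing over it), and `calibrated_membership_score` subtracts the error obtained under a reference generator trained on disjoint data. All function names, the latent dimension, and the sampling budget are illustrative assumptions.

```python
# Illustrative sketch only (assumed interfaces, not the paper's code):
# a calibrated, reconstruction-distance membership inference attack
# against a generative model with a callable generator G(z) -> samples.
import numpy as np

def reconstruction_error(x, generator, n_latent_samples=2000, latent_dim=128, rng=None):
    """Approximate min_z ||x - G(z)||^2 by random latent sampling
    (a crude stand-in for a latent-space optimization)."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((n_latent_samples, latent_dim))
    recons = generator(z)  # expected shape: (n_latent_samples, *x.shape)
    errs = ((recons - x) ** 2).reshape(n_latent_samples, -1).sum(axis=1)
    return errs.min()

def calibrated_membership_score(x, victim_generator, reference_generator, **kw):
    """Higher score => more likely that x was in the victim's training set.
    Subtracting the reference model's error prevents intrinsically
    hard-to-reconstruct samples from being mistaken for non-members."""
    err_victim = reconstruction_error(x, victim_generator, **kw)
    err_reference = reconstruction_error(x, reference_generator, **kw)
    return -(err_victim - err_reference)

# Usage: decide membership by thresholding the calibrated score.
# is_member = calibrated_membership_score(x, G_victim, G_ref) > threshold
```

In this sketch, the distance metric, the latent-space search strategy, and the availability of a reference model are the main knobs; the paper's taxonomy and experiments examine how such choices and the attacker's knowledge of the victim model affect attack performance.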