Defending Against Membership Inference Attacks With High Utility by GAN

Cited by: 14
Authors
Hu, Li [1 ]
Li, Jin [1 ]
Lin, Guanbiao [1 ]
Peng, Shiyu [1 ]
Zhang, Zhenxin [1 ]
Zhang, Yingying [1 ]
Dong, Changyu [2 ]
Affiliations
[1] Guangzhou Univ, Inst Artificial Intelligence & Blockchain, Guangzhou, Guangdong, Peoples R China
[2] Newcastle Univ, Sch Comp, Newcastle Upon Tyne NE1 7RU, England
Funding
National Natural Science Foundation of China
Keywords
Data models; Training; Generative adversarial networks; Privacy; Machine learning; Computational modeling; Training data; Membership inference attack; generative adversarial network; machine learning; privacy;
DOI
10.1109/TDSC.2022.3174569
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
The success of machine learning (ML) depends on the availability of large-scale datasets. However, recent studies have shown that models trained on such datasets are vulnerable to privacy attacks, among which the membership inference attack (MIA) poses a serious privacy risk: it allows an adversary to infer whether a given sample belongs to the training dataset of the target model. Although a variety of defenses against MIA have been proposed, such as differential privacy and adversarial regularization, they also reduce model accuracy and thus make the models less usable. In this article, aiming to maintain accuracy while protecting privacy against MIA, we propose a new defense against membership inference attacks based on a generative adversarial network (GAN). Specifically, the sensitive data is used to train a GAN, and the GAN then generates the data used to train the actual model. To ensure that a model trained this way on small datasets still has high utility, two different GAN structures with specialized training techniques are used to handle image data and tabular data, respectively. Experimental results show that the defense is effective on different datasets against existing attack schemes, and is more efficient than the most advanced MIA defenses.
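The defense pipeline the abstract describes, train a GAN on the sensitive data and then train the released model only on GAN-generated samples, can be sketched as follows. This is a minimal, illustrative toy in plain NumPy: a one-dimensional affine generator and a logistic-regression discriminator stand in for the paper's actual GAN architectures (which are not specified here), and the "sensitive dataset" is a synthetic Gaussian sample. The point is only the data flow: the real data is touched by the GAN alone, and the downstream model never sees it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensitive" dataset: 1-D samples from N(4, 1). In the paper this
# would be the private image or tabular training data.
real_data = rng.normal(4.0, 1.0, size=1000)

# Minimal GAN: generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.01

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    # Mini-batches of real and generated points.
    x_real = rng.choice(real_data, size=32)
    z = rng.normal(size=32)
    x_fake = a * z + b

    # Discriminator ascent step on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step on the non-saturating objective log D(fake).
    d_fake = sigmoid(w * x_fake + c)
    g_grad_x = (1 - d_fake) * w   # d log D(x) / dx at the fake points
    a += lr * np.mean(g_grad_x * z)
    b += lr * np.mean(g_grad_x)

# Release phase: the actual model is trained on synthetic samples only,
# so MIA against it cannot directly expose membership in real_data.
synthetic = a * rng.normal(size=1000) + b
```

Because the released model's training set is drawn from the generator rather than from the sensitive records, a membership inference adversary querying the released model is probing membership in the synthetic set, which is the mechanism behind the privacy/utility trade-off the paper evaluates.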
Pages: 2144-2157
Page count: 14