Defending Against Membership Inference Attack by Shielding Membership Signals

Cited by: 2
Authors
Miao, Yinbin [1 ]
Yu, Yueming [1 ]
Li, Xinghua [2 ]
Guo, Yu [3 ]
Liu, Ximeng [4 ]
Choo, Kim-Kwang Raymond [5 ]
Deng, Robert H. [6 ]
Affiliations
[1] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
[2] Xidian Univ, Engn Res Ctr Big data Secur, Sch Cyber Engn, State Key Lab Integrated Serv Networks,Minist Educ, Xian 710071, Peoples R China
[3] Beijing Normal Univ, Sch Artificial Intelligence, Beijing 100091, Peoples R China
[4] Fuzhou Univ, Sch Math & Comp Sci, Key Lab Informat Secur Network Syst, Fuzhou 350108, Peoples R China
[5] Univ Texas San Antonio, Dept Informat Syst & Cyber Secur, San Antonio, TX 78249 USA
[6] Singapore Management Univ, Sch Informat Syst, Singapore 178902, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Machine learning; Membership inference attack; Multi-model ensemble framework; Membership signals;
DOI
10.1109/TSC.2023.3309336
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812 ;
Abstract
Membership Inference Attack (MIA) is a key measure for evaluating privacy leakage in Machine Learning (ML) models; it aims to distinguish private members from non-members by training an attack model. Beyond the traditional MIA, the recently proposed Generative Adversarial Network (GAN)-based MIA lets the adversary learn the distribution of the victim's private dataset, thereby significantly improving attack accuracy. Against both the traditional and this new type of attack, previous defense schemes cannot handle the trade-off between privacy and utility well. To this end, we propose a defense based on a multi-model ensemble framework. Specifically, we train multiple submodels to hide membership signals and resist MIA, reducing privacy leakage while preserving the effectiveness of the target model. Our security analysis shows that our scheme provides privacy protection while preserving model utility. Experimental results on widely used datasets show that our scheme effectively resists MIAs with negligible utility loss.
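The abstract describes training multiple submodels and serving an aggregated output so that no single model's member-specific confidence is exposed. A minimal sketch of that split-and-ensemble idea, on toy data with plain logistic-regression submodels; the splitting rule, averaging step, and all names here are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train_logreg(X, y, lr=0.1, steps=200):
    """Plain logistic regression via gradient descent (stand-in for a submodel)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y                      # gradient of the logistic loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Split the private training set into k disjoint shards,
# training one submodel per shard.
k = 4
shards = np.array_split(rng.permutation(len(y)), k)
submodels = [train_logreg(X[idx], y[idx]) for idx in shards]

def ensemble_predict(x):
    """Serve the average of the submodels' confidences: each training point
    influences only one submodel, so its member-specific confidence is
    diluted in the aggregate, masking the membership signal."""
    probs = [1 / (1 + np.exp(-(x @ w + b))) for w, b in submodels]
    return float(np.mean(probs))
```

The design point is that the attacker only ever observes the aggregated confidence, not the output of the submodel that actually memorized a given record.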
Pages: 4087-4101
Page count: 15