Privacy Risks of Securing Machine Learning Models against Adversarial Examples

Cited by: 120
Authors
Song, Liwei [1 ]
Shokri, Reza [2 ]
Mittal, Prateek [1 ]
Affiliations
[1] Princeton Univ, Princeton, NJ 08544 USA
[2] Natl Univ Singapore, Singapore, Singapore
Source
PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19) | 2019
Funding
US National Science Foundation; National Research Foundation of Singapore
Keywords
machine learning; membership inference attacks; adversarial examples and defenses; deep neural networks; face recognition
DOI
10.1145/3319535.3354211
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The arms race between attacks and defenses for machine learning models has come to the forefront in recent years, in both the security community and the privacy community. However, one major limitation of previous research is that the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards resolving this limitation by combining the two domains. In particular, we measure the success of membership inference attacks against six state-of-the-art defense methods that mitigate the risk of adversarial examples (i.e., evasion attacks). Membership inference attacks determine whether or not an individual data record has been part of a model's training set. The accuracy of such attacks reflects the information leakage of training algorithms about individual members of the training set. Defense methods against adversarial examples influence the model's decision boundaries such that model predictions remain unchanged in a small neighborhood around each input. However, this objective is optimized over the training data, so individual records in the training set have a significant influence on robust models, which makes the models more vulnerable to inference attacks. To perform the membership inference attacks, we leverage existing inference methods that exploit model predictions. We also propose two new inference methods that exploit structural properties of robust models on adversarially perturbed data. Our experimental evaluation demonstrates that, compared with natural (undefended) training, adversarial defense methods can indeed increase the target model's risk against membership inference attacks. When adversarial defenses are used to train robust models, the membership inference advantage increases by up to 4.5 times compared to naturally trained (undefended) models. Beyond revealing the privacy risks of adversarial defenses, we further investigate the factors, such as model capacity, that influence membership information leakage.
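The membership inference advantage mentioned in the abstract is commonly defined as the gap between the attack's true-positive rate on training members and its false-positive rate on non-members. Below is a minimal sketch of a confidence-thresholding membership inference attack of the kind the abstract describes; the function name, threshold value, and toy arrays are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of a confidence-based membership
# inference attack: guess "member" when the model's confidence in the true
# label exceeds a threshold, then measure the attack's advantage (TPR - FPR).
import numpy as np

def membership_advantage(conf_members: np.ndarray,
                         conf_nonmembers: np.ndarray,
                         threshold: float = 0.9) -> float:
    """Advantage of the thresholding attack.

    conf_members    -- model confidence in the true label for training records
    conf_nonmembers -- model confidence in the true label for held-out records
    """
    tpr = np.mean(conf_members >= threshold)    # members correctly flagged
    fpr = np.mean(conf_nonmembers >= threshold)  # non-members wrongly flagged
    return float(tpr - fpr)

# Toy example: a model that is markedly more confident on its training data
# leaks more membership information (larger advantage).
members = np.array([0.99, 0.97, 0.95, 0.98, 0.60])
nonmembers = np.array([0.70, 0.55, 0.92, 0.40, 0.65])
print(membership_advantage(members, nonmembers))  # 0.6, up to rounding
```

The same thresholding idea can be applied to other per-record signals; the paper's two new inference methods instead exploit the model's behavior on adversarially perturbed versions of each input.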
Pages: 241-257
Page count: 17