SNAP: Efficient Extraction of Private Properties with Poisoning

Cited: 12
Authors
Chaudhari, Harsh [1 ]
Abascal, John [1 ]
Oprea, Alina [1 ]
Jagielski, Matthew [2 ]
Tramer, Florian [3 ]
Ullman, Jonathan [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Google Res, Mountain View, CA USA
[3] ETH, Zurich, Switzerland
Source
2023 IEEE Symposium on Security and Privacy (SP) | 2023
DOI
10.1109/SP46215.2023.10179334
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Property inference attacks allow an adversary to extract global properties of the training dataset from a machine learning model. Such attacks have privacy implications for data owners who share their datasets to train machine learning models. Several existing approaches for property inference attacks against deep neural networks have been proposed [1]-[3], but they all rely on the attacker training a large number of shadow models, which incurs a large computational overhead. In this paper, we consider the setting of property inference attacks in which the attacker can poison a subset of the training dataset and query the trained target model. Motivated by our theoretical analysis of model confidences under poisoning, we design an efficient property inference attack, SNAP, which achieves higher attack success and requires less poisoning than the state-of-the-art poisoning-based property inference attack by Mahloujifar et al. [3]. For example, on the Census dataset, SNAP achieves a 34% higher success rate than [3] while being 56.5x faster. We also extend our attack to infer whether a certain property was present at all during training and to efficiently estimate the exact proportion of a property of interest. We evaluate our attack on several properties of varying proportions from four datasets and demonstrate SNAP's generality and effectiveness.
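To make the mechanism described in the abstract concrete, below is a minimal, hypothetical sketch of a poisoning-based distinguishing test in the spirit of SNAP: the attacker injects records that carry the target property with a fixed label, then queries the trained model's confidence on property-holding points and thresholds that statistic to decide between two candidate property proportions. The synthetic data, the logistic-regression target model, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of a poisoning-based property inference test (illustrative
# assumptions only; not the SNAP code or its theoretical threshold).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
TARGET_LABEL = 1  # label the attacker attaches to poisoned property records

def make_data(n, prop_frac):
    """Synthetic binary task; column 0 is a binary 'property' attribute."""
    x = rng.normal(size=(n, 5))
    x[:, 0] = (rng.random(n) < prop_frac).astype(float)
    y = (x[:, 1] + 0.5 * x[:, 2] > 0).astype(int)  # labels ignore the property
    return x, y

def inject_poison(x, y, k):
    """Attacker injects k records that carry the property, labeled TARGET_LABEL."""
    xp = rng.normal(size=(k, 5))
    xp[:, 0] = 1.0
    return np.vstack([x, xp]), np.concatenate([y, np.full(k, TARGET_LABEL)])

def attack_statistic(prop_frac, n=5000, poison_k=150):
    """Train a poisoned target model, then query its confidence on property points."""
    x, y = make_data(n, prop_frac)
    x, y = inject_poison(x, y, poison_k)
    model = LogisticRegression(max_iter=1000).fit(x, y)
    xq, _ = make_data(500, prop_frac=1.0)  # query set: every record has the property
    return model.predict_proba(xq)[:, TARGET_LABEL].mean()

# Distinguish world t0 (5% of training data has the property) from world t1 (30%):
# the same amount of poisoning shifts confidences more when the property is rarer,
# so a simple threshold on the mean confidence separates the two worlds.
c0 = attack_statistic(prop_frac=0.05)
c1 = attack_statistic(prop_frac=0.30)
print(f"mean confidence on the poisoned label  t0 (5%): {c0:.3f}   t1 (30%): {c1:.3f}")
```

In this sketch the decision threshold would simply sit between the two confidence means; the paper instead derives the test from its theoretical analysis of model confidences under poisoning.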
Pages: 400-417 (18 pages)
Cited References (46 in total)
[1] Ateniese Giuseppe, 2015, International Journal of Security and Networks, V10, P137
[2] Balle B., 2022, arXiv
[3] Biggio B., 2011, Asian Conference on Machine Learning, P97
[4] Biggio Battista, 2012, ICML, DOI 10.48550/ARXIV.1206.6389
[5] Boenisch Franziska, 2021, CURIOUS ABANDON HONE
[6] Carlini N, 2022, Proceedings of the IEEE Symposium on Security and Privacy, P1897, DOI [10.1109/SP46214.2022.00090, 10.1109/SP46214.2022.9833649]
[7] Carlini N, 2021, Proceedings of the 30th USENIX Security Symposium, P2633
[8] Carlini N, 2019, Proceedings of the 28th USENIX Security Symposium, P267
[9] Chen Xinyun, 2017, Targeted backdoor attacks on deep learning systems using data poisoning
[10] Choquette-Choo C. A., 2021, Proceedings of the 38th International Conference on Machine Learning