Investigating Membership Inference Attacks under Data Dependencies

Cited by: 3
|
Authors
Humphries, Thomas [1 ]
Oya, Simon [1 ]
Tulloch, Lindsey [1 ]
Rafuse, Matthew [1 ]
Goldberg, Ian [1 ]
Hengartner, Urs [1 ]
Kerschbaum, Florian [1 ]
Affiliations
[1] Univ Waterloo, Waterloo, ON, Canada
Source
2023 IEEE 36TH COMPUTER SECURITY FOUNDATIONS SYMPOSIUM, CSF | 2023
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Membership Inference Attacks; Differential Privacy; Privacy;
DOI
10.1109/CSF57540.2023.00013
CLC classification number
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Training machine learning models on privacy-sensitive data has become a popular practice, driving innovation in ever-expanding fields. This has opened the door to new attacks that can have serious privacy implications. One such attack, the Membership Inference Attack (MIA), exposes whether or not a particular data point was used to train a model. A growing body of literature uses Differentially Private (DP) training algorithms as a defence against such attacks. However, these works evaluate the defence under the restrictive assumption that all members of the training set, as well as non-members, are independent and identically distributed. This assumption does not hold for many real-world use cases in the literature. Motivated by this, we evaluate membership inference with statistical dependencies among samples and explain why DP does not provide meaningful protection (the privacy parameter epsilon scales with the training set size n) in this more general case. We conduct a series of empirical evaluations with off-the-shelf MIAs using training sets built from real-world data showing different types of dependencies among samples. Our results reveal that training set dependencies can severely increase the performance of MIAs, and therefore assuming that data samples are statistically independent can significantly underestimate the performance of MIAs.
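The parenthetical in the abstract refers to the standard group-privacy argument: an epsilon-DP training algorithm only guarantees (k * epsilon)-DP for groups of k correlated records, so when dependencies span the whole training set the effective bound degrades to n * epsilon and becomes essentially vacuous. For context on the attack class being evaluated, the sketch below shows a minimal loss-threshold membership inference attack in the spirit of Yeom et al.; the predict_proba callable, the threshold, and the data are hypothetical placeholders, and this is not the specific off-the-shelf attacks or DP training setup evaluated in the paper.

    # Minimal sketch of a loss-threshold membership inference attack
    # (Yeom et al.-style); illustrative only, not the paper's method.
    # `predict_proba`, `threshold`, and the data are placeholders.
    import numpy as np

    def per_example_loss(probs, label, eps=1e-12):
        # Cross-entropy loss of the target model on one labelled example.
        return -np.log(probs[label] + eps)

    def loss_threshold_mia(predict_proba, xs, ys, threshold):
        # Guess "member" when the target model's loss on (x, y) falls
        # below the threshold: training-set members tend to have lower loss.
        losses = np.array([per_example_loss(predict_proba(x), y)
                           for x, y in zip(xs, ys)])
        return losses < threshold  # boolean membership guesses

The threshold is typically calibrated on data the attacker knows to be non-members (or via shadow models); the attack needs only black-box access to the target model's predicted probabilities.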
Pages: 473-488
Number of pages: 16
Related papers
50 records in total
  • [41] Efficient Membership Inference Attacks against Federated Learning via Bias Differences
    Zhang, Liwei
    Li, Linghui
    Li, Xiaoyong
    Cai, Binsi
    Gao, Yali
    Dou, Ruobin
    Chen, Luying
    PROCEEDINGS OF THE 26TH INTERNATIONAL SYMPOSIUM ON RESEARCH IN ATTACKS, INTRUSIONS AND DEFENSES, RAID 2023, 2023, : 222 - 235
  • [42] Membership Inference Attacks against GANs by Leveraging Over-representation Regions
    Hu, Hailong
    Pang, Jun
    CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 2387 - 2389
  • [43] Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks
    Famili, Azadeh
    Lao, Yingjie
    SENSORS, 2023, 23 (18)
  • [44] Multi-level membership inference attacks in federated learning based on active GAN
    Sui, Hao
    Sun, Xiaobing
    Zhang, Jiale
    Chen, Bing
    Li, Wenjuan
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (23) : 17013 - 17027
  • [46] mDARTS: Searching ML-Based ECG Classifiers Against Membership Inference Attacks
    Park, Eunbin
    Lee, Youngjoo
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2025, 29 (01) : 177 - 187
  • [47] Assessing Differentially Private Variational Autoencoders Under Membership Inference
    Bernau, Daniel
    Robl, Jonas
    Kerschbaum, Florian
    DATA AND APPLICATIONS SECURITY AND PRIVACY XXXVI, DBSEC 2022, 2022, 13383 : 3 - 14
  • [48] Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
    Truex, Stacey
    Liu, Ling
    Gursoy, Mehmet Emre
    Wei, Wenqi
    Yu, Lei
    2019 FIRST IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2019), 2019, : 82 - 91
  • [49] LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks
    Ma, Mengyao
    Zhang, Yanjun
    Chamikara, M. A. P.
    Zhang, Leo Yu
    Chhetri, Mohan Baruwal
    Bai, Guangdong
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 122 - 135
  • [50] MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
    Jia, Jinyuan
    Salem, Ahmed
    Backes, Michael
    Zhang, Yang
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 259 - 274