Attribute-Based Membership Inference Attacks and Defenses on GANs

Cited by: 1
Authors
Sun, Hui [1 ]
Zhu, Tianqing [2 ]
Li, Jie [1 ]
Ji, Shouling [3 ]
Zhou, Wanlei [4 ]
Affiliations
[1] China Univ Geosci, Wuhan 430079, Hubei, Peoples R China
[2] Univ Technol Sydney, Sydney, NSW 2007, Australia
[3] Zhejiang Univ, Hangzhou 310027, Zhejiang, Peoples R China
[4] City Univ Macau, Taipa, Macao, Peoples R China
Keywords
Training; Image reconstruction; Generators; Generative adversarial networks; Codes; Privacy; Training data; Membership inference attack; generative adversarial networks; privacy leakage
DOI
10.1109/TDSC.2023.3305591
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
With breakthroughs in high-resolution image generation, applications of disentangled generative adversarial networks (GANs) have attracted much attention. At the same time, the privacy issues associated with GAN models have raised many concerns. Membership inference attacks (MIAs), in which an adversary attempts to determine whether a sample was used to train the victim model, are a major risk for GANs. Prior research has shown that successful MIAs can be mounted by leveraging overfit images. However, existing MIAs fail on high-resolution images due to their complexity. Moreover, disentangled GANs by their nature overfit individual attributes, which means that a successful MIA is most likely one based on overfitting attributes. Furthermore, given the empirical difficulty of obtaining independent and identically distributed (IID) candidate samples, targeting the non-trivial attributes of candidate samples when probing for overfitting is the preferable choice. Hence, in this article, we propose a series of attribute-based MIAs that consider both black-box and white-box settings. The attacks are performed on the generator, and membership is inferred from the overfitting of non-trivial attributes. Additionally, we put forward a novel perspective on model generalization and a possible defense based on evaluating the overfitting status of each individual attribute. Empirical evaluations in both settings demonstrate that the attacks remain stable and successful even with non-IID candidate samples. Further experiments show that each attribute exhibits a distinct overfitting status. Moreover, manually generalizing highly overfit attributes significantly reduces the risk of privacy leakage.
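The attack idea described in the abstract — inferring membership from how closely the generator reproduces a candidate's attributes — can be illustrated with a minimal sketch. This is a generic nearest-neighbour membership score in attribute space, not the paper's actual algorithm; the function names (`mia_score`, `infer_membership`), the use of Euclidean distance, and the fixed threshold are all illustrative assumptions.

```python
import numpy as np

def mia_score(candidate_attrs, generated_attrs):
    """Black-box membership score (illustrative): distance from the
    candidate's attribute vector to the nearest attribute vector
    among samples drawn from the generator. A small distance suggests
    the generator overfits those attributes, hinting the candidate
    was a training member."""
    dists = np.linalg.norm(generated_attrs - candidate_attrs, axis=1)
    return float(dists.min())

def infer_membership(candidate_attrs, generated_attrs, threshold):
    """Predict membership when the nearest-neighbour attribute
    distance falls below a calibrated threshold."""
    return mia_score(candidate_attrs, generated_attrs) < threshold

# Toy usage: 2-D attribute vectors extracted from generated images
# (in practice an attribute classifier would produce these).
generated = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
member = np.array([0.01, 0.0])      # near-duplicated by the generator
non_member = np.array([5.0, 5.0])   # far from anything generated
print(infer_membership(member, generated, threshold=0.1))      # True
print(infer_membership(non_member, generated, threshold=0.1))  # False
```

In practice the threshold would be calibrated on reference samples, and the attribute vectors would come from a separate attribute classifier applied to generated images; the sketch only conveys the thresholded-distance decision rule.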
Pages: 2376 - 2393
Page count: 18