Attribute-Based Membership Inference Attacks and Defenses on GANs

Cited by: 1
Authors
Sun, Hui [1 ]
Zhu, Tianqing [2 ]
Li, Jie [1 ]
Ji, Shoulin [3 ]
Zhou, Wanlei [4 ]
Affiliations
[1] China Univ Geosci, Wuhan 430079, Hubei, Peoples R China
[2] Univ Technol Sydney, Sydney, NSW 2007, Australia
[3] Zhejiang Univ, Hangzhou 310027, Zhejiang, Peoples R China
[4] City Univ Macau, Taipa, Macao, Peoples R China
Keywords
Training; Image reconstruction; Generators; Generative adversarial networks; Codes; Privacy; Training data; Membership inference attack; generative adversarial networks; privacy leakage
DOI
10.1109/TDSC.2023.3305591
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
With breakthroughs in high-resolution image generation, applications of disentangled generative adversarial networks (GANs) have attracted much attention. At the same time, the privacy issues associated with GAN models have raised many concerns. Membership inference attacks (MIAs), in which an adversary attempts to determine whether a sample was used to train the victim model, are a major risk for GANs. Prior research has shown that successful MIAs can be mounted by exploiting overfit images. However, the complexity of high-resolution images causes existing MIAs to fail, and the nature of disentangled GANs is that individual attributes overfit, which means that a successful MIA most likely has to be based on overfit attributes. Furthermore, given the empirical difficulty of obtaining independent and identically distributed (IID) candidate samples, targeting the non-trivial attributes of candidate samples when probing for overfitting is the preferable choice. Hence, in this article, we propose a series of attribute-based MIAs that cover both black-box and white-box settings. The attacks are performed on the generator, and membership is inferred from the overfitting of non-trivial attributes. Additionally, we put forward a novel perspective on model generalization and a possible defense that evaluates the overfitting status of each individual attribute. Empirical evaluations in both settings demonstrate that the attacks remain stable and successful with non-IID candidate samples. Further experiments show that each attribute exhibits a distinct overfitting status, and that manually generalizing highly overfit attributes significantly reduces the risk of privacy leakage.
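To make the attack principle summarized above concrete, the following is a minimal sketch of a black-box, attribute-based membership inference score. It assumes hypothetical components not defined in this record: a generator that maps latent codes to images, an attr_classifier that maps an image to a vector of non-trivial attribute scores, and a threshold calibrated on known non-member samples. It illustrates the general idea of inferring membership from attribute-space overfitting, not the authors' exact algorithm.

import torch

@torch.no_grad()
def attribute_mia_score(candidate_img, generator, attr_classifier,
                        n_queries=2000, batch_size=100, latent_dim=512,
                        device="cpu"):
    # Minimum attribute-space distance between the candidate sample and
    # images drawn from the generator; a small distance suggests the
    # candidate's non-trivial attributes were overfit during training.
    target_attrs = attr_classifier(candidate_img.unsqueeze(0).to(device))
    best = float("inf")
    for _ in range(n_queries // batch_size):
        z = torch.randn(batch_size, latent_dim, device=device)   # random latent codes
        fake_attrs = attr_classifier(generator(z))                # attribute vectors of fakes
        dists = torch.norm(fake_attrs - target_attrs, dim=1)      # L2 distance per fake
        best = min(best, dists.min().item())
    return best

def infer_membership(candidate_img, generator, attr_classifier, threshold):
    # Predict "member" when some generated image reproduces the candidate's
    # non-trivial attributes more closely than the calibrated threshold.
    return attribute_mia_score(candidate_img, generator, attr_classifier) < threshold

A white-box variant might, for instance, optimize the latent code directly against the generator instead of sampling it at random, but that level of detail is not given in the abstract.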
Pages: 2376-2393
Page count: 18