GanNoise: Defending against black-box membership inference attacks by countering noise generation

Cited by: 2
Authors
Liang, Jiaming [1 ]
Huang, Teng [1 ]
Luo, Zidan [1 ]
Li, Dan [1 ]
Li, Yunhao [1 ]
Ding, Ziyu [1 ]
Affiliations
[1] Guangzhou Univ, Guangzhou, Peoples R China
Source
2023 INTERNATIONAL CONFERENCE ON DATA SECURITY AND PRIVACY PROTECTION, DSPP | 2023
Funding
National Natural Science Foundation of China;
Keywords
data privacy; deep learning; MIA defense;
DOI
10.1109/DSPP58763.2023.10405019
CLC number
TP301 [Theory, Methods];
Subject classification code
081202
Abstract
In recent years, data privacy in deep learning has attracted considerable interest. Pre-trained large-scale data-driven models are at risk of membership inference attacks. However, current defenses against such data leakage may degrade the performance of pre-trained models. In this paper, we propose a novel training framework called GanNoise that preserves privacy while maintaining the accuracy of classification tasks. By using adversarial regularization to train a noise generation model, we generate noise that adds randomness to private data during model training, effectively preventing excessive memorization of the actual training data. Our experimental results demonstrate the efficacy of the framework against existing attack schemes on various datasets, while outperforming advanced MIA defense solutions in terms of efficiency.
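The intuition behind the defense described above can be illustrated with a toy sketch: a model that memorizes its training data yields systematically higher confidence on members than on non-members, which a confidence-thresholding membership inference attack exploits; perturbing the stored training data with noise shrinks that gap. The 1-NN "model", the confidence measure, and the Gaussian noise here are illustrative assumptions, not the paper's actual architecture (GanNoise learns its noise with an adversarially trained generator):

```python
import numpy as np

rng = np.random.default_rng(0)

def confidences(train_x, query_x, noise_scale=0.0):
    """Confidence of a memorizing 1-NN 'model' on query points.

    noise_scale perturbs the stored training data, a crude stand-in
    for the learned noise a GanNoise-style generator would produce.
    """
    noisy_train = train_x + noise_scale * rng.standard_normal(train_x.shape)
    # Confidence = inverse distance to the nearest memorized point.
    d = np.min(np.linalg.norm(query_x[:, None] - noisy_train[None], axis=-1), axis=1)
    return 1.0 / (1.0 + d)

members = rng.standard_normal((50, 5))      # training data
non_members = rng.standard_normal((50, 5))  # unseen data

# Member/non-member confidence gap: the signal a threshold-based MIA uses.
gap_plain = confidences(members, members).mean() - confidences(members, non_members).mean()
gap_noisy = confidences(members, members, 1.0).mean() - confidences(members, non_members, 1.0).mean()
print(gap_plain > gap_noisy)  # noise shrinks the gap the attacker relies on
```

Without noise, members sit at distance zero from their memorized copies (confidence 1.0), so the gap is large; with noise, member and non-member confidences converge, which is exactly the property the defense trades a small amount of utility for.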
Pages: 32-40
Number of pages: 9
Related Papers
50 items total
  • [21] Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks
    Mahmood, Kaleel
    Mahmood, Rigel
    Rathbun, Ethan
    van Dijk, Marten
    IEEE ACCESS, 2022, 10 : 998 - 1019
  • [22] Automatic Selection Attacks Framework for Hard Label Black-Box Models
    Liu, Xiaolei
    Li, Xiaoyu
    Zheng, Desheng
    Bai, Jiayu
    Peng, Yu
    Zhang, Shibin
    IEEE INFOCOM 2022 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2022,
  • [23] One Parameter Defense-Defending Against Data Inference Attacks via Differential Privacy
    Ye, Dayong
    Shen, Sheng
    Zhu, Tianqing
    Liu, Bo
    Zhou, Wanlei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 1466 - 1480
  • [24] Black-box Evolutionary Search for Adversarial Examples against Deep Image Classifiers in Non-Targeted Attacks
    Prochazka, Stepan
    Neruda, Roman
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [25] DAMIA: Leveraging Domain Adaptation as a Defense Against Membership Inference Attacks
    Huang, Hongwei
    Luo, Weiqi
    Zeng, Guoqiang
    Weng, Jian
    Zhang, Yue
    Yang, Anjia
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (05) : 3183 - 3199
  • [26] EAR: An Enhanced Adversarial Regularization Approach against Membership Inference Attacks
    Hu, Hongsheng
    Salcic, Zoran
    Dobbie, Gillian
    Chen, Yi
    Zhang, Xuyun
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [27] Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data
    Shi, Yi
    Sagduyu, Yalin E.
    Davaslioglu, Kemal
    Li, Jason H.
    2018 IEEE INTERNATIONAL SYMPOSIUM ON SIGNAL PROCESSING AND INFORMATION TECHNOLOGY (ISSPIT), 2018, : 453 - 458
  • [28] Output regeneration defense against membership inference attacks for protecting data privacy
    Ding, Yong
    Huang, Peixiong
    Liang, Hai
    Yuan, Fang
    Wang, Huiyong
    INTERNATIONAL JOURNAL OF WEB INFORMATION SYSTEMS, 2023, : 61 - 79
  • [29] Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning
    Gomrokchi, Maziar
    Amin, Susan
    Aboutalebi, Hossein
    Wong, Alexander
    Precup, Doina
    IEEE ACCESS, 2023, 11 : 42796 - 42808
  • [30] GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
    Chen, Dingfan
    Yu, Ning
    Zhang, Yang
    Fritz, Mario
    CCS '20: PROCEEDINGS OF THE 2020 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2020, : 343 - 362