GanNoise: Defending against black-box membership inference attacks by countering noise generation

Citations: 2
Authors
Liang, Jiaming [1 ]
Huang, Teng [1 ]
Luo, Zidan [1 ]
Li, Dan [1 ]
Li, Yunhao [1 ]
Ding, Ziyu [1 ]
Affiliations
[1] Guangzhou University, Guangzhou, People's Republic of China
Source
2023 INTERNATIONAL CONFERENCE ON DATA SECURITY AND PRIVACY PROTECTION (DSPP), 2023
Funding
National Natural Science Foundation of China
Keywords
data privacy; deep learning; MIA defense;
DOI
10.1109/DSPP58763.2023.10405019
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
In recent years, data privacy in deep learning has attracted considerable interest. Pre-trained large-scale data-driven models are at risk of membership inference attacks (MIAs). However, existing defenses against such data leakage tend to reduce the performance of pre-trained models. In this paper, we propose a novel training framework called GanNoise that preserves privacy while maintaining the accuracy of classification tasks. By using adversarial regularization to train a noise-generation model, we generate noise that adds randomness to private data during model training, effectively preventing excessive memorization of the actual training data. Our experimental results illustrate the efficacy of the framework against existing attack schemes on various datasets, while outperforming advanced MIA defense solutions in terms of efficiency.
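The abstract describes a training loop in which a noise generator is trained jointly with the target classifier under adversarial regularization, so that a membership-inference adversary cannot tell member from non-member outputs while classification accuracy is retained. The following is a minimal PyTorch sketch of that idea only; the module architectures, noise scale, attack proxy, and trade-off weight lam are all illustrative assumptions, not the authors' implementation.

# Minimal sketch of GanNoise-style training (assumed design, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseGenerator(nn.Module):
    """Maps an input to a small bounded additive perturbation (assumed design)."""
    def __init__(self, dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                 nn.Linear(256, dim), nn.Tanh())
    def forward(self, x):
        return 0.1 * self.net(x)  # 0.1 is an assumed noise scale

class Classifier(nn.Module):
    def __init__(self, dim=784, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_classes))
    def forward(self, x):
        return self.net(x)

class AttackModel(nn.Module):
    """Proxy MIA adversary: predicts membership from the classifier's softmax."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_classes, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, probs):
        return self.net(probs)

gen, clf, atk = NoiseGenerator(), Classifier(), AttackModel()
opt_main = torch.optim.Adam(list(gen.parameters()) + list(clf.parameters()), lr=1e-3)
opt_atk = torch.optim.Adam(atk.parameters(), lr=1e-3)
lam = 0.5  # assumed privacy/utility trade-off weight

def train_step(x_member, y_member, x_nonmember):
    # 1) Update the attack model to separate member vs. non-member outputs.
    with torch.no_grad():
        p_in = F.softmax(clf(x_member + gen(x_member)), dim=1)
        p_out = F.softmax(clf(x_nonmember), dim=1)
    atk_logits = torch.cat([atk(p_in), atk(p_out)])
    atk_labels = torch.cat([torch.ones(len(p_in), 1), torch.zeros(len(p_out), 1)])
    loss_atk = F.binary_cross_entropy_with_logits(atk_logits, atk_labels)
    opt_atk.zero_grad(); loss_atk.backward(); opt_atk.step()

    # 2) Update generator + classifier: stay accurate on noised inputs while
    #    pushing the attacker toward "non-member" on members (adversarial
    #    regularization).
    logits = clf(x_member + gen(x_member))
    loss_task = F.cross_entropy(logits, y_member)
    loss_priv = F.binary_cross_entropy_with_logits(
        atk(F.softmax(logits, dim=1)), torch.zeros(len(x_member), 1))
    loss = loss_task + lam * loss_priv
    opt_main.zero_grad(); loss.backward(); opt_main.step()

# Toy usage with random data, for illustration only:
x_in, y_in = torch.randn(32, 784), torch.randint(0, 10, (32,))
x_out = torch.randn(32, 784)
train_step(x_in, y_in, x_out)

The two-step structure mirrors standard adversarial regularization: the attacker is trained to maximize membership-inference advantage, and the generator/classifier pair is trained to minimize task loss while driving that advantage down.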
Pages: 32 - 40
Number of pages: 9