GanNoise: Defending against black-box membership inference attacks by countering noise generation

Cited by: 2
Authors
Liang, Jiaming [1 ]
Huang, Teng [1 ]
Luo, Zidan [1 ]
Li, Dan [1 ]
Li, Yunhao [1 ]
Ding, Ziyu [1 ]
Affiliations
[1] Guangzhou Univ, Guangzhou, Peoples R China
Source
2023 INTERNATIONAL CONFERENCE ON DATA SECURITY AND PRIVACY PROTECTION, DSPP | 2023
Funding
National Natural Science Foundation of China;
Keywords
data privacy; deep learning; MIA defense;
DOI
10.1109/DSPP58763.2023.10405019
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
In recent years, data privacy in deep learning has seen a notable surge of interest. Pretrained large-scale data-driven models are at risk of membership inference attacks. However, current defenses against such data leakage may reduce the performance of pre-trained models. In this paper, we propose a novel training framework called GanNoise that preserves privacy while maintaining the accuracy of classification tasks. By utilizing adversarial regularization to train a noise generation model, we generate noise that adds randomness to private data during model training, effectively preventing excessive memorization of the actual training data. Our experimental results demonstrate the efficacy of the framework against existing attack schemes on various datasets, while outperforming advanced MIA defense solutions in terms of efficiency.
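The core idea in the abstract can be illustrated with a minimal sketch. The names and setup below are hypothetical: GanNoise trains its noise generator adversarially as a GAN, whereas this toy `noise_generator` simply draws Gaussian perturbations, and the target model is a logistic regression on synthetic two-blob data rather than a deep network. The sketch only shows the training-loop structure, where fresh noise is applied to the private inputs at every epoch so the model never fits the exact training records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "private" dataset: two Gaussian blobs (stand-in for real training data).
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def noise_generator(x, scale=0.3):
    """Stand-in for the learned noise model: GanNoise trains this
    adversarially; here it is plain Gaussian noise for illustration."""
    return x + rng.normal(0, scale, x.shape)

# Logistic-regression "target model" trained only on perturbed inputs,
# limiting memorization of the exact private records.
w, b = np.zeros(2), 0.0
for _ in range(200):
    Xn = noise_generator(X)              # fresh noise each epoch
    p = 1 / (1 + np.exp(-(Xn @ w + b)))  # sigmoid predictions
    w -= 0.5 * (Xn.T @ (p - y) / len(y)) # cross-entropy gradient step
    b -= 0.5 * np.mean(p - y)

# Utility is largely preserved: evaluate on the clean inputs.
acc = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y)
print("clean-data accuracy:", acc)
```

Because the noise is resampled every epoch, the model sees a different randomized view of each record at each step, which is the intuition behind reduced membership leakage; the actual defense in the paper shapes this noise with adversarial regularization rather than a fixed scale.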
Pages: 32 - 40
Number of pages: 9
Related Papers
50 records in total
  • [31] Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning
    Gomrokchi, Maziar
    Amin, Susan
    Aboutalebi, Hossein
    Wong, Alexander
    Precup, Doina
    IEEE ACCESS, 2023, 11 : 42796 - 42808
  • [32] Black-Box Audio Adversarial Example Generation Using Variational Autoencoder
    Zong, Wei
    Chow, Yang-Wai
    Susilo, Willy
    INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2021), PT II, 2021, 12919 : 142 - 160
  • [33] MalFox: Camouflaged Adversarial Malware Example Generation Based on Conv-GANs Against Black-Box Detectors
    Zhong, Fangtian
    Cheng, Xiuzhen
    Yu, Dongxiao
    Gong, Bei
    Song, Shuaiwen
    Yu, Jiguo
    IEEE TRANSACTIONS ON COMPUTERS, 2024, 73 (04) : 980 - 993
  • [34] Defending against sparse adversarial attacks using impulsive noise reduction filters
    Radlak, Krystian
    Szczepankiewicz, Michal
    Smolka, Bogdan
    REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2021, 2021, 11736
  • [35] Simple Black-Box Adversarial Examples Generation with Very Few Queries
    Senzaki, Yuya
    Ohata, Satsuya
    Matsuura, Kanta
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (02) : 212 - 221
  • [36] Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation
    Gao, Xianfeng
    Tan, Yu-an
    Jiang, Hongwei
    Zhang, Quanxin
    Kuang, Xiaohui
    APPLIED SCIENCES-BASEL, 2019, 9 (11):
  • [37] mDARTS: Searching ML-Based ECG Classifiers Against Membership Inference Attacks
    Park, Eunbin
    Lee, Youngjoo
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2025, 29 (01) : 177 - 187
  • [38] When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence
    Coqueret, Benoit
    Carbone, Mathieu
    Sentieys, Olivier
    Zaid, Gabriel
    PROCEEDINGS OF THE 16TH ACM WORKSHOP ON ARTIFICIAL INTELLIGENCE AND SECURITY, AISEC 2023, 2023, : 127 - 138
  • [39] GCSA: A New Adversarial Example-Generating Scheme Toward Black-Box Adversarial Attacks
    Fan, Xinxin
    Li, Mengfan
    Zhou, Jia
    Jing, Quanliang
    Lin, Chi
    Lu, Yunfeng
    Bi, Jingping
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01) : 2038 - 2048
  • [40] Query-Efficient Black-Box Adversarial Attacks Guided by a Transfer-Based Prior
    Dong, Yinpeng
    Cheng, Shuyu
    Pang, Tianyu
    Su, Hang
    Zhu, Jun
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (12) : 9536 - 9548