GanNoise: Defending against black-box membership inference attacks by countering noise generation

Cited by: 2
Authors
Liang, Jiaming [1 ]
Huang, Teng [1 ]
Luo, Zidan [1 ]
Li, Dan [1 ]
Li, Yunhao [1 ]
Ding, Ziyu [1 ]
Affiliations
[1] Guangzhou Univ, Guangzhou, Peoples R China
Source
2023 INTERNATIONAL CONFERENCE ON DATA SECURITY AND PRIVACY PROTECTION, DSPP | 2023
Funding
National Natural Science Foundation of China;
Keywords
data privacy; deep learning; MIA defense;
DOI
10.1109/DSPP58763.2023.10405019
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
In recent years, data privacy in deep learning has attracted considerable interest. Pre-trained large-scale data-driven models are at risk of membership inference attacks (MIAs). However, existing defenses against this form of data leakage can degrade the performance of pre-trained models. In this paper, we propose a novel training framework called GanNoise that preserves privacy while maintaining the accuracy of classification tasks. By using adversarial regularization to train a noise generation model, we generate noise that adds randomness to private data during model training, effectively preventing excessive memorization of the actual training data. Our experimental results illustrate the efficacy of the framework against existing attack schemes on various datasets, while outperforming advanced MIA defense solutions in terms of efficiency.
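The defense described in the abstract — injecting generated noise into private training samples so the model cannot memorize them, while a black-box attacker measures membership through model confidence — can be illustrated with a toy sketch. The code below is not the authors' GanNoise implementation: it is a minimal, self-contained analogue in plain Python that trains a one-parameter logistic classifier with and without input noise, then compares the train/test confidence gap that a black-box "gap" MIA would exploit. All names (`fit`, `avg_conf`, the Gaussian toy data, the fixed-scale noise standing in for a learned generator) are illustrative assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample(n, mean):
    # 1-D Gaussian toy data; label 1 for the positive-mean class.
    return [(random.gauss(mean, 1.0), int(mean > 0)) for _ in range(n)]

train = sample(200, -1.0) + sample(200, 1.0)
test = sample(200, -1.0) + sample(200, 1.0)

def fit(data, noise_scale=0.0, epochs=30, lr=0.1):
    # One-parameter logistic model p = sigmoid(w * x), trained by SGD.
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            # GanNoise-style idea (heavily simplified): perturb each
            # private sample before the gradient step, so the model
            # never fits the exact training points. Here the "noise
            # generator" is a fixed Gaussian, not an adversarially
            # trained GAN as in the paper.
            xn = x + random.gauss(0.0, noise_scale)
            p = sigmoid(w * xn)
            w += lr * (y - p) * xn
    return w

def accuracy(w, data):
    return sum((w * x > 0) == bool(y) for x, y in data) / len(data)

def avg_conf(w, data):
    # Mean confidence assigned to the true label; a black-box "gap"
    # MIA thrives on a large train-vs-test confidence gap.
    return sum(sigmoid(w * x) if y else 1.0 - sigmoid(w * x)
               for x, y in data) / len(data)

w_plain = fit(train, noise_scale=0.0)
w_noise = fit(train, noise_scale=1.0)

gap_plain = avg_conf(w_plain, train) - avg_conf(w_plain, test)
gap_noise = avg_conf(w_noise, train) - avg_conf(w_noise, test)
print("test accuracy (noisy training):", accuracy(w_noise, test))
print("confidence gaps (plain, noisy):", gap_plain, gap_noise)
```

This under-parameterized toy barely overfits, so both gaps stay small; its point is the utility side of the abstract's claim — test accuracy survives the injected noise. In an over-parameterized network the plain gap would be large, and shrinking it is what the paper's learned noise generator targets.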
Pages: 32 - 40
Page count: 9
Related Papers
50 records
  • [41] POBA-GA: Perturbation optimized black-box adversarial attacks via genetic algorithm
    Chen, Jinyin
    Su, Mengmeng
    Shen, Shijing
    Xiong, Hui
    Zheng, Haibin
    COMPUTERS & SECURITY, 2019, 85 : 89 - 106
  • [42] Evaluation of Four Black-box Adversarial Attacks and Some Query-efficient Improvement Analysis
    Wang, Rui
    2022 PROGNOSTICS AND HEALTH MANAGEMENT CONFERENCE, PHM-LONDON 2022, 2022, : 298 - 302
  • [43] Categorical Inference Poisoning: Verifiable Defense Against Black-Box DNN Model Stealing Without Constraining Surrogate Data and Query Times
    Zhang, Haitian
    Hua, Guang
    Wang, Xinya
    Jiang, Hao
    Yang, Wen
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 1473 - 1486
  • [44] Black-box Adversarial Examples against Intelligent Beamforming in 5G Networks
    Zolotukhin, Mikhail
    Miraghaie, Parsa
    Zhang, Di
    Hamalainen, Timo
    Ke, Wang
    Dunderfelt, Marja
    2022 IEEE CONFERENCE ON STANDARDS FOR COMMUNICATIONS AND NETWORKING, CSCN, 2022, : 64 - 70
  • [45] DeeBBAA: A Benchmark Deep Black-Box Adversarial Attack Against Cyber-Physical Power Systems
    Bhattacharjee, Arnab
    Bai, Guangdong
    Tushar, Wayes
    Verma, Ashu
    Mishra, Sukumar
    Saha, Tapan K.
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (24): 40670 - 40688
  • [46] An empirical study of derivative-free-optimization algorithms for targeted black-box attacks in deep neural networks
    Ughi, Giuseppe
    Abrol, Vinayak
    Tanner, Jared
    Optimization and Engineering, 2022, 23 : 1319 - 1346
  • [47] An empirical study of derivative-free-optimization algorithms for targeted black-box attacks in deep neural networks
    Ughi, Giuseppe
    Abrol, Vinayak
    Tanner, Jared
    OPTIMIZATION AND ENGINEERING, 2022, 23 (03) : 1319 - 1346
  • [48] Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks
    Theagarajan, Rajkumar
    Bhanu, Bir
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (12) : 9503 - 9520
  • [49] Secure Deep Neural Network Models Publishing Against Membership Inference Attacks Via Training Task Parallelism
    Mao, Yunlong
    Hong, Wenbo
    Zhu, Boyu
    Zhu, Zhifei
    Zhang, Yuan
    Zhong, Sheng
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (11) : 3079 - 3091
  • [50] Defending Deep Learning Based Anomaly Detection Systems Against White-Box Adversarial Examples and Backdoor Attacks
    Alrawashdeh, Khaled
    Goldsmith, Stephen
    PROCEEDINGS OF THE 2020 IEEE INTERNATIONAL SYMPOSIUM ON TECHNOLOGY AND SOCIETY (ISTAS), 2021, : 294 - 301