EAR: An Enhanced Adversarial Regularization Approach against Membership Inference Attacks

Cited by: 2
Authors
Hu, Hongsheng [1 ]
Salcic, Zoran [1 ]
Dobbie, Gillian [2 ]
Chen, Yi [3 ]
Zhang, Xuyun [4 ]
Affiliations
[1] Univ Auckland, Dept ECE, Auckland, New Zealand
[2] Univ Auckland, Sch Comp Sci, Auckland, New Zealand
[3] Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu, Peoples R China
[4] Macquarie Univ, Dept Comp, Sydney, NSW, Australia
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
Keywords
Data privacy; Membership inference attacks; Adversarial regularization; Machine learning;
DOI
10.1109/IJCNN52387.2021.9534381
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104; 0812; 0835; 1405;
Abstract
Membership inference attacks on a machine learning model aim to determine whether a given data record is a member of the training set. They pose severe privacy risks to individuals; for example, identifying an individual's participation in a hospital's health-analytics training set reveals that this individual was once a patient in that hospital. Adversarial regularization (AR) is one of the state-of-the-art defense methods that mitigate such attacks while preserving a model's prediction accuracy. AR adds the membership inference attack as a new regularization term to the target model during training; the resulting adversarial training of the defended model is essentially the same as training the generator of a generative adversarial network (GAN). We observe that many GAN variants generate higher-quality samples and offer more stable training than the original GAN. However, whether these GAN variants can be leveraged to improve the effectiveness of AR has not been investigated. In this paper, we propose an enhanced adversarial regularization (EAR) method based on Least Squares GANs (LSGANs). EAR surpasses the existing AR by offering a stronger defense while preserving the prediction accuracy of the protected classifiers. We systematically evaluate EAR on five datasets with different target classifiers under four attack methods and compare it with four other defense methods. Experiments show that our new method performs the best among all the compared defenses.
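As a rough illustration of the idea described in the abstract, the sketch below implements adversarial regularization with an LSGAN-style least-squares objective in PyTorch: an attack model plays the role of the GAN discriminator on the classifier's prediction vectors, and the defended classifier is trained against it. The toy data, network sizes, the regularization weight lambda_reg, and the 0/1 target labels used in the least-squares terms are assumptions for illustration only, not the authors' configuration.

# Minimal sketch of LSGAN-style adversarial regularization (in the spirit of EAR).
# All sizes, data, and hyper-parameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy member (training) and non-member (reference) data: 20-d features, 5 classes.
n, d, k = 256, 20, 5
x_mem, y_mem = torch.randn(n, d), torch.randint(0, k, (n,))
x_ref, y_ref = torch.randn(n, d), torch.randint(0, k, (n,))

# Target classifier to be defended.
classifier = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))
# Attack model: sees the softmax prediction plus the one-hot label and
# outputs a membership score (acts as the GAN "discriminator").
attack = nn.Sequential(nn.Linear(k + k, 64), nn.ReLU(), nn.Linear(64, 1))

opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(attack.parameters(), lr=1e-3)
lambda_reg = 1.0  # weight of the adversarial regularization term (assumed)

def attack_score(x, y, detach_probs=False):
    """Membership score from the classifier's softmax output and one-hot label."""
    probs = F.softmax(classifier(x), dim=1)
    if detach_probs:
        probs = probs.detach()  # keep classifier fixed while training the attacker
    return attack(torch.cat([probs, F.one_hot(y, k).float()], dim=1))

for step in range(200):
    # 1) Train the attack model with a least-squares (LSGAN-style) loss:
    #    member scores are pushed toward 1, non-member scores toward 0.
    s_mem = attack_score(x_mem, y_mem, detach_probs=True)
    s_ref = attack_score(x_ref, y_ref, detach_probs=True)
    loss_a = 0.5 * ((s_mem - 1).pow(2).mean() + s_ref.pow(2).mean())
    opt_a.zero_grad()
    loss_a.backward()
    opt_a.step()

    # 2) Train the classifier: task loss plus an LSGAN-style "generator" term
    #    that makes member predictions look like non-members to the attacker.
    task_loss = F.cross_entropy(classifier(x_mem), y_mem)
    reg_loss = 0.5 * attack_score(x_mem, y_mem).pow(2).mean()  # drive scores toward 0
    loss_c = task_loss + lambda_reg * reg_loss
    opt_c.zero_grad()
    loss_c.backward()
    opt_c.step()

The design mirrors the abstract's observation that AR trains the defended model like a GAN generator; swapping the usual log-loss adversarial term for the least-squares terms above is the LSGAN-style modification.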
Pages: 8
相关论文
共 50 条
  • [21] GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models
    Chen, Dingfan
    Yu, Ning
    Zhang, Yang
    Fritz, Mario
    CCS '20: PROCEEDINGS OF THE 2020 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2020, : 343 - 362
  • [22] Membership Inference Attacks Against Machine Learning Models via Prediction Sensitivity
    Liu, Lan
    Wang, Yi
    Liu, Gaoyang
    Peng, Kai
    Wang, Chen
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03) : 2341 - 2347
  • [23] Membership Inference Attacks against GANs by Leveraging Over-representation Regions
    Hu, Hailong
    Pang, Jun
    CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 2387 - 2389
  • [24] Efficient Adversarial Training with Membership Inference Resistance
    Yan, Ran
    Du, Ruiying
    He, Kun
    Chen, Jing
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT I, 2024, 14425 : 474 - 486
  • [25] Natural Language Inference Based on Adversarial Regularization
    Liu G.-C.
    Cao Y.
    Xu J.-M.
    Xu B.
    Zidonghua Xuebao/Acta Automatica Sinica, 2019, 45 (08): : 1455 - 1463
  • [26] Membership Inference Attacks on Machine Learning: A Survey
    Hu, Hongsheng
    Salcic, Zoran
    Sun, Lichao
    Dobbie, Gillian
    Yu, Philip S.
    Zhang, Xuyun
    ACM COMPUTING SURVEYS, 2022, 54 (11S)
  • [27] Enhance membership inference attacks in federated learning
    He, Xinlong
    Xu, Yang
    Zhang, Sicong
    Xu, Weida
    Yan, Jiale
    COMPUTERS & SECURITY, 2024, 136
  • [28] Membership Inference Attacks and Generalization: A Causal Perspective
    Baluta, Teodora
    Shen, Shiqi
    Hitarth, S.
    Tople, Shruti
    Saxena, Prateek
    PROCEEDINGS OF THE 2022 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2022, 2022, : 249 - 262
  • [29] mDARTS: Searching ML-Based ECG Classifiers Against Membership Inference Attacks
    Park, Eunbin
    Lee, Youngjoo
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2025, 29 (01) : 177 - 187
  • [30] A Robust Approach for Securing Audio Classification Against Adversarial Attacks
    Esmaeilpour, Mohammad
    Cardinal, Patrick
    Koerich, Alessandro
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2020, 15 : 2147 - 2159