EAR: An Enhanced Adversarial Regularization Approach against Membership Inference Attacks

Cited by: 2
Authors
Hu, Hongsheng [1 ]
Salcic, Zoran [1 ]
Dobbie, Gillian [2 ]
Chen, Yi [3 ]
Zhang, Xuyun [4 ]
Affiliations
[1] Univ Auckland, Dept ECE, Auckland, New Zealand
[2] Univ Auckland, Sch Comp Sci, Auckland, New Zealand
[3] Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu, Peoples R China
[4] Macquarie Univ, Dept Comp, Sydney, NSW, Australia
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
Keywords
Data privacy; Membership inference attacks; Adversarial regularization; Machine learning
DOI
10.1109/IJCNN52387.2021.9534381
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Membership inference attacks on a machine learning model aim to determine whether a given data record is a member of the training set. They pose severe privacy risks to individuals; e.g., identifying an individual's participation in a hospital's health analytics training set reveals that this individual was once a patient in that hospital. Adversarial regularization (AR) is one of the state-of-the-art defense methods that mitigate such attacks while preserving a model's prediction accuracy. AR adds a membership inference attack as a new regularization term to the target model during the training process. It is an adversarial training algorithm in which training the defended model is essentially the same as training the generator of a generative adversarial network (GAN). We observe that many GAN variants are able to generate higher-quality samples and offer more training stability than the original GAN. However, whether these GAN variants can be used to improve the effectiveness of AR has not been investigated. In this paper, we propose an enhanced adversarial regularization (EAR) method based on Least Squares GANs (LSGANs). EAR surpasses the existing AR by offering a stronger defense while preserving the prediction accuracy of the protected classifiers. We systematically evaluate EAR on five datasets with different target classifiers under four different attack methods, and compare it with four other defense methods. We experimentally show that our new method performs the best among the compared defense methods.
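For context, the abstract frames AR as a min-max game in which the defended classifier plays the role of a GAN generator and the membership inference model plays the role of the discriminator, with EAR replacing the GAN-style log loss by an LSGAN-style least-squares loss. The formulas below are a minimal sketch of that substitution, assuming the adversarial-regularization objective of Nasr et al. (CCS 2018) and a direct least-squares replacement of the inference-gain term; the exact labels, weighting, and alternating training schedule used in the paper may differ.

With private training set D, a disjoint reference set D' of non-members, target classifier f, inference model h, classification loss \ell, and regularization weight \lambda, the original AR objective can be written as
\[
\min_{f}\ \max_{h}\ \ \mathbb{E}_{(x,y)\in D}\big[\ell(f(x),y)\big]
\;+\; \lambda\Big(\mathbb{E}_{(x,y)\in D}\big[\log h(x,y,f(x))\big]
\;+\; \mathbb{E}_{(x,y)\in D'}\big[\log\big(1-h(x,y,f(x))\big)\big]\Big).
\]
An LSGAN-style variant swaps the log terms for least-squares terms, alternating between updating the inference model
\[
\min_{h}\ \tfrac{1}{2}\,\mathbb{E}_{(x,y)\in D}\big[\big(h(x,y,f(x))-1\big)^{2}\big]
\;+\; \tfrac{1}{2}\,\mathbb{E}_{(x,y)\in D'}\big[h(x,y,f(x))^{2}\big]
\]
and updating the classifier
\[
\min_{f}\ \mathbb{E}_{(x,y)\in D}\big[\ell(f(x),y)\big]
\;+\; \tfrac{\lambda}{2}\,\mathbb{E}_{(x,y)\in D}\big[h(x,y,f(x))^{2}\big],
\]
so the classifier pushes the attack score of its training members toward the non-member label 0 under a quadratic penalty; the quadratic loss is what LSGANs use to obtain non-vanishing gradients and more stable training than the log loss.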
Pages: 8
Related Papers (50 total)
  • [1] Xie, Yuanyuan; Chen, Bing; Zhang, Jiale; Wu, Di. Defending against Membership Inference Attacks in Federated Learning via Adversarial Example. 2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021: 153-160.
  • [2] Zhang, Guanglin; Zhang, Anqi; Zhao, Ping. LocMIA: Membership Inference Attacks Against Aggregated Location Data. IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (12): 11778-11788.
  • [3] Liu, Yiqing; Yu, Juan; Han, Jianmin. BAN-MPR: Defending against Membership Inference Attacks with Born Again Networks and Membership Privacy Regularization. 2022 INTERNATIONAL CONFERENCE ON COMPUTERS AND ARTIFICIAL INTELLIGENCE TECHNOLOGIES (CAIT), 2022: 9-15.
  • [4] Jia, Jinyuan; Salem, Ahmed; Backes, Michael; Zhang, Yang; Gong, Neil Zhenqiang. MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples. PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019: 259-274.
  • [5] Hu, Li; Yan, Hongyang; Peng, Yun; Hu, Haibo; Wang, Shaowei; Li, Jin. VAE-Based Membership Cleanser Against Membership Inference Attacks. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22 (02): 1253-1264.
  • [6] Ben Hamida, Sana; Mrabet, Hichem; Belguith, Sana; Alhomoud, Adeeb; Jemai, Abderrazak. Towards Securing Machine Learning Models Against Membership Inference Attacks. CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 70 (03): 4897-4919.
  • [7] Hu, Hongsheng; Zhang, Xuyun; Salcic, Zoran; Sun, Lichao; Choo, Kim-Kwang Raymond; Dobbie, Gillian. Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04): 3012-3029.
  • [8] Ben Hamida, Sana; Mrabet, Hichem; Chaieb, Faten; Jemai, Abderrazak. Assessment of data augmentation, dropout with L2 Regularization and differential privacy against membership inference attacks. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (15): 44455-44484.
  • [9] Hu, Li; Li, Jin; Lin, Guanbiao; Peng, Shiyu; Zhang, Zhenxin; Zhang, Yingying; Dong, Changyu. Defending Against Membership Inference Attacks With High Utility by GAN. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03): 2144-2157.