Machine Learning with Membership Privacy using Adversarial Regularization

Cited by: 226
Authors
Nasr, Milad [1 ]
Shokri, Reza [2 ]
Houmansadr, Amir [1 ]
Affiliations
[1] Univ Massachusetts, Amherst, MA 01003 USA
[2] Natl Univ Singapore, Singapore, Singapore
Keywords
Data privacy; Machine learning; Inference attacks; Membership privacy; Indistinguishability; Min-max game; Adversarial process
DOI
10.1145/3243734.3243855
Chinese Library Classification (CLC)
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Machine learning models leak a significant amount of information about their training sets through their predictions. This is a serious privacy concern for the users of machine learning as a service. To address this concern, in this paper we focus on mitigating the risks of black-box inference attacks against machine learning models. We introduce a mechanism to train models with membership privacy, which ensures indistinguishability between the predictions of a model on its training data and on other data points from the same distribution. This requires minimizing the accuracy of the best black-box membership inference attack against the model. We formalize this as a min-max game, and design an adversarial training algorithm that minimizes both the prediction loss of the model and the maximum gain of the inference attacks. This strategy, which can guarantee membership privacy (as prediction indistinguishability), also acts as a strong regularizer and helps the model generalize. We evaluate the practical feasibility of our privacy mechanism for training deep neural networks on benchmark datasets. We show that the min-max strategy can mitigate the risks of membership inference attacks (to near random guessing), and can achieve this with a negligible drop in the model's prediction accuracy (less than 4%).
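The alternating min-max training the abstract describes can be illustrated with a minimal sketch: an inner loop trains an attack model to distinguish members (training points) from reference points drawn from the same distribution, and an outer loop updates the classifier to minimize its prediction loss plus a penalty proportional to the attacker's gain on members. This toy version uses logistic models instead of the paper's deep networks; the attack feature (confidence in the true label), the trade-off weight `lam`, and the finite-difference penalty gradient are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: "members" (the training set) and a disjoint reference set,
# both drawn from the same distribution.
X_member = rng.normal(0, 1, (200, 2))
y_member = (X_member[:, 0] + X_member[:, 1] > 0).astype(float)
X_ref = rng.normal(0, 1, (200, 2))
y_ref = (X_ref[:, 0] + X_ref[:, 1] > 0).astype(float)

w = np.zeros(2)   # classifier weights
v = np.zeros(2)   # attack weights on [confidence, bias]
lam = 1.0         # privacy/utility trade-off (assumed value)
lr = 0.1

def attack_features(X, y, w):
    # The attacker observes the model's confidence in the true label.
    p = sigmoid(X @ w)
    conf = np.where(y == 1, p, 1 - p)
    return np.column_stack([conf, np.ones_like(conf)])

for step in range(200):
    # Inner maximization: attacker learns to separate members from references.
    f_mem = attack_features(X_member, y_member, w)
    f_ref = attack_features(X_ref, y_ref, w)
    for _ in range(5):
        g_mem = sigmoid(f_mem @ v)   # attacker's "is a member" probability
        g_ref = sigmoid(f_ref @ v)
        # Gradient ascent on the membership log-likelihood.
        grad_v = f_mem.T @ (1 - g_mem) / len(f_mem) - f_ref.T @ g_ref / len(f_ref)
        v += lr * grad_v
    # Outer minimization: classification loss + lam * attacker gain on members.
    p = sigmoid(X_member @ w)
    grad_cls = X_member.T @ (p - y_member) / len(X_member)
    # Numerical gradient of the attacker's gain, for brevity of the sketch.
    eps = 1e-4
    gain0 = sigmoid(attack_features(X_member, y_member, w) @ v).mean()
    grad_pen = np.zeros_like(w)
    for i in range(len(w)):
        w2 = w.copy(); w2[i] += eps
        gain = sigmoid(attack_features(X_member, y_member, w2) @ v).mean()
        grad_pen[i] = (gain - gain0) / eps
    w -= lr * (grad_cls + lam * grad_pen)

# Utility: accuracy on held-out points; privacy: attack accuracy (0.5 = random guess).
acc = ((sigmoid(X_ref @ w) > 0.5) == (y_ref == 1)).mean()
attack_acc = 0.5 * (sigmoid(attack_features(X_member, y_member, w) @ v) > 0.5).mean() \
           + 0.5 * (sigmoid(attack_features(X_ref, y_ref, w) @ v) <= 0.5).mean()
print(f"test accuracy={acc:.2f}  attack accuracy={attack_acc:.2f}")
```

Because the members and references here come from the same distribution and the model is linear, the attacker has little signal to exploit; in the paper's setting the same min-max structure is applied to deep networks, where the regularizer actively suppresses the membership signal created by overfitting.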
Pages: 634-646
Page count: 13