Comprehensive Privacy Analysis of Deep Learning Passive and Active White-box Inference Attacks against Centralized and Federated Learning

Cited by: 851
Authors
Nasr, Milad [1 ]
Shokri, Reza [2 ]
Houmansadr, Amir [1 ]
Affiliations
[1] Univ Massachusetts, Amherst, MA 01003 USA
[2] Natl Univ Singapore, Singapore, Singapore
Source
2019 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP 2019) | 2019
DOI
10.1109/SP.2019.00065
CLC Number
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
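The abstract's central observation — that the per-example gradients produced by SGD behave differently on training members than on held-out points, and can therefore serve as a membership signal — can be illustrated with a minimal toy sketch. This is not the paper's actual attack (which trains an inference model on layer-wise gradients and activations of a deep network); it is a hedged illustration using a hand-rolled logistic regression, with all constants and names chosen for the example.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

DIM = 30        # feature dimension (plus one bias coordinate below)
N_MEMBERS = 20  # small training set so the model memorizes

def sample(n):
    # Random features with random binary labels: any fit is pure memorization.
    return [([random.gauss(0.0, 1.0) for _ in range(DIM)] + [1.0],
             random.randint(0, 1)) for _ in range(n)]

members = sample(N_MEMBERS)     # training set (the "members")
nonmembers = sample(N_MEMBERS)  # held out, drawn from the same distribution

def grad(w, x, y):
    # Per-example gradient of the logistic loss: (sigmoid(w . x) - y) * x
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * xi for xi in x]

# Plain SGD on the member set, enough passes to drive member loss near zero.
w = [0.0] * (DIM + 1)
for _ in range(500):
    for x, y in members:
        g = grad(w, x, y)
        w = [wi - 0.1 * gi for wi, gi in zip(w, g)]

def grad_norm(x, y):
    return math.sqrt(sum(gi * gi for gi in grad(w, x, y)))

avg_member = sum(grad_norm(x, y) for x, y in members) / len(members)
avg_nonmember = sum(grad_norm(x, y) for x, y in nonmembers) / len(nonmembers)

# A threshold between the two averages acts as a crude membership test:
# memorized members yield near-zero gradients, fresh points do not.
print(avg_member < avg_nonmember)
```

Because the labels are random, the trained model fits the members only by memorizing them, so their gradient norms collapse while non-member gradients stay large; the paper's white-box attacks exploit exactly this kind of gap, but learn it from the gradients of each layer of a deep network rather than a single threshold.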
Pages: 739-753 (15 pages)