Membership Inference Attacks Against Deep Learning Models via Logits Distribution

Cited by: 3
Authors
Yan, Hongyang [1 ,2 ]
Li, Shuhao [3 ]
Wang, Yajie [4 ]
Zhang, Yaoyuan [3 ]
Sharif, Kashif [3 ]
Hu, Haibo [5 ]
Li, Yuanzhang [3 ]
Affiliations
[1] Guangzhou Univ, Inst Artificial Intelligence & Blockchain, Guangzhou 510006, Peoples R China
[2] Hong Kong Polytech Univ, Dept Elect & Informat Engn, Hong Kong 999077, Peoples R China
[3] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[4] Beijing Inst Technol, Sch Cyberspace Sci & Technol, Beijing 100081, Peoples R China
[5] Guangzhou Univ, Inst Artificial Intelligence & Blockchain, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Training; Deep learning; Computational modeling; Federated learning; Data privacy; Predictive models; membership inference attacks (MIAs); logits; substitute model;
DOI
10.1109/TDSC.2022.3222880
CLC number
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
Deep Learning (DL) techniques have gained significant importance in recent years due to their wide range of applications. However, DL models remain prone to several attacks, such as the Membership Inference Attack (MIA), which exploits the model's memorization of its training data. An MIA aims to determine whether specific data was present in the model's training dataset, typically by using a substitute model whose structure is similar to that of the objective model. Because MIAs rely on such substitute models, they can be mitigated when the adversary does not know the network structure of the objective model. To address this challenge of shadow-model construction, this work presents L-Leaks, a membership inference attack based on logits. L-Leaks allows an adversary to use the substitute model's information to predict membership, provided the substitute and objective models are similar enough. Here, the substitute model is built by learning the logits of the objective model, which makes it sufficiently similar; as a result, the substitute model assigns high confidence to the member samples of the objective model. The evaluation of the attack's success shows that the proposed technique executes the attack more accurately than existing techniques, and that the proposed MIA remains significantly robust across different network models and datasets.
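As a rough illustration of the pipeline the abstract describes, below is a minimal sketch of a logits-based substitute-model membership inference attack, assuming a PyTorch setting with black-box access to the objective model's logits. The names SubstituteNet, fit_substitute, and infer_membership, as well as the fixed confidence threshold, are illustrative assumptions, not the paper's actual L-Leaks implementation (which may, for instance, train an attack classifier instead of thresholding).

```python
# Hedged sketch: train a substitute model on the objective model's logits,
# then flag inputs on which the substitute is highly confident as members.
# All names and hyperparameters here are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubstituteNet(nn.Module):
    """Small classifier standing in for the adversary's substitute model."""
    def __init__(self, in_dim: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x):
        return self.net(x)

def fit_substitute(target_model, aux_loader, in_dim, n_classes, epochs=10):
    """Fit the substitute by regressing onto the objective model's logits,
    queried in a black-box fashion on auxiliary (non-member) data."""
    sub = SubstituteNet(in_dim, n_classes)
    opt = torch.optim.Adam(sub.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, _ in aux_loader:
            with torch.no_grad():
                teacher_logits = target_model(x)  # queried logits
            loss = F.mse_loss(sub(x), teacher_logits)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return sub

def infer_membership(sub, x, threshold=0.9):
    """Declare x a training member when the substitute's top softmax
    confidence exceeds a threshold (a placeholder decision rule)."""
    with torch.no_grad():
        conf = F.softmax(sub(x), dim=-1).max(dim=-1).values
    return conf >= threshold
```

The core design choice this sketch captures is that the substitute is trained to reproduce the objective model's logits rather than hard labels, so its confidence on a candidate sample serves as a proxy signal for that sample's membership in the objective model's training set.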
Pages: 3799 - 3808
Page count: 10