Membership inference attack on differentially private block coordinate descent

Cited by: 0
Authors
Riaz S. [1 ,2 ]
Ali S. [2 ,3 ]
Wang G. [3 ]
Latif M.A. [2 ]
Iqbal M.Z. [4 ]
Affiliations
[1] School of Computing, Macquarie University, Sydney
[2] Department of Computer Science, University of Agriculture, Punjab, Faisalabad
[3] School of Computing, Guangzhou University, Guangzhou
[4] Department of Mathematics and Statistics, University of Agriculture Faisalabad, Punjab, Faisalabad
Funding
National Natural Science Foundation of China;
Keywords
Differential privacy; Differentially private block coordinate descent; Membership inference attack; Privacy-preserving deep learning;
DOI
10.7717/PEERJ-CS.1616
Abstract
The extraordinary success of deep learning is made possible by the availability of crowd-sourced large-scale training datasets. These datasets often contain personal and confidential information and therefore have great potential for misuse, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest. One prominent approach to preventing the leakage of sensitive information about the training data is to apply differential privacy during training, yielding differentially private deep learning models. Although these models are claimed to safeguard against privacy attacks targeting sensitive information, little work in the literature practically evaluates this capability by mounting a sophisticated attack against them. Recently, differentially private block coordinate descent (DP-BCD) was proposed as an alternative to the state-of-the-art DP-SGD for preserving the privacy of deep learning models, offering a low privacy cost and fast convergence while producing highly accurate predictions. To check its practical capability, in this article we evaluate the impact of a sophisticated privacy attack, the membership inference attack, against it in both black-box and white-box settings. More precisely, we inspect how much information can be inferred about a differentially private deep model’s training data. We run experiments on benchmark datasets and report AUC, attacker advantage, precision, recall, and F1-score as performance metrics. The experimental results show that DP-BCD keeps its promise of preserving privacy against strong adversaries while providing acceptable model utility compared with state-of-the-art techniques. © 2023 Riaz et al.
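
The following is a minimal sketch of how such a membership inference evaluation can be scored, assuming a simple loss-threshold attack; the loss arrays, the threshold rule, and all variable names are illustrative assumptions and not the authors' implementation.

# Minimal sketch of a loss-threshold membership inference attack and the
# metrics named in the abstract (AUC, attacker advantage, precision, recall,
# F1-score). Placeholder data only; not the paper's attack code.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve, precision_recall_fscore_support

# Per-example losses of the target (e.g., DP-BCD-trained) model on training
# members and on held-out non-members; here random placeholders.
member_losses = np.random.exponential(0.5, size=1000)     # hypothetical
nonmember_losses = np.random.exponential(1.0, size=1000)  # hypothetical

# Membership score: lower loss -> more likely a training member.
scores = np.concatenate([-member_losses, -nonmember_losses])
labels = np.concatenate([np.ones(1000, dtype=int), np.zeros(1000, dtype=int)])  # 1 = member

auc = roc_auc_score(labels, scores)

# Attacker advantage = max over thresholds of (TPR - FPR).
fpr, tpr, _ = roc_curve(labels, scores)
advantage = np.max(tpr - fpr)

# Binary predictions at a fixed score threshold for precision/recall/F1.
threshold = np.median(scores)
preds = (scores >= threshold).astype(int)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")

print(f"AUC={auc:.3f}  advantage={advantage:.3f}  "
      f"precision={precision:.3f}  recall={recall:.3f}  F1={f1:.3f}")

In this sketch, an AUC near 0.5 and an advantage near 0 would indicate that the attacker gains little from the model, which is the behavior the abstract reports for DP-BCD against strong adversaries.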