Integrally Private Model Selection for Deep Neural Networks

Cited by: 1
Authors
Varshney, Ayush K. [1 ]
Torra, Vicenc [1 ]
Affiliations
[1] Umea Univ, Dept Comp Sci, S-90740 Umea, Sweden
Source
DATABASE AND EXPERT SYSTEMS APPLICATIONS, DEXA 2023, PT II | 2023 / Vol. 14147
Keywords
Data privacy; Integral privacy; Deep neural networks; Privacy-preserving ML;
DOI
10.1007/978-3-031-39821-6_33
CLC number
TP31 [Computer Software];
Subject classification number
081202; 0835;
Abstract
Deep neural networks (DNNs) are among the most widely used machine learning algorithms. In the literature, most privacy-related work on DNNs focuses on adding perturbations to the output to prevent attacks, which can lead to significant utility loss. The large number of weights and biases in a DNN can result in a unique model for each set of training data; in this case, an adversary can perform model comparison attacks that lead to the disclosure of the training data. In our work, we first introduce the model comparison attack for DNNs, which accounts for the permutation of nodes within a layer. To overcome this, we introduce a relaxed notion of integral privacy called epsilon-integral privacy, and we further provide a methodology for recommending epsilon-integrally private models. We use a data-centric approach to generate subsamples that have the same class distribution as the original data. We have experimented with 6 datasets of varied sizes (10k to 7 million instances), and our experimental results show that our recommended private models achieve benchmark-comparable utility. We also achieve benchmark-comparable test accuracy for 4 different DNN architectures. Under comparison with three different levels of differential privacy, our methodology shows superior results.
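The data-centric subsampling the abstract describes can be illustrated with a stratified sample that preserves the original class distribution. This is a minimal sketch, not the authors' code; the function name and parameters are assumptions for illustration.

```python
# Illustrative sketch (assumed interface, not the paper's implementation):
# draw a subsample whose per-class proportions match the original labels.
import random
from collections import defaultdict

def stratified_subsample(X, y, fraction, seed=0):
    """Sample a `fraction` of the data without replacement, keeping the
    class distribution of the labels y (stratified sampling)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(y):
        by_class[label].append(idx)
    chosen = []
    for label, indices in by_class.items():
        # Round per class so each class keeps its original proportion.
        k = max(1, round(len(indices) * fraction))
        chosen.extend(rng.sample(indices, k))
    rng.shuffle(chosen)
    return [X[i] for i in chosen], [y[i] for i in chosen]

# Toy example with a 2:1 class imbalance.
X = list(range(30))
y = [0] * 20 + [1] * 10
Xs, ys = stratified_subsample(X, y, fraction=0.5)
# The subsample keeps the 2:1 ratio: 10 instances of class 0, 5 of class 1.
```

Each such subsample can then be used to train one candidate model, so that models arising from many different subsamples can be compared under the recurrence requirement of integral privacy.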
Pages: 408-422
Page count: 15