Competitor Attack Model for Privacy-Preserving Deep Learning

Times Cited: 0
Authors
Zhao, Dongdong [1 ]
Liao, Songsong [2 ]
Li, Huanhuan [3 ]
Xiang, Jianwen [2 ]
Affiliations
[1] Wuhan Univ Technol, Chongqing Res Inst, Sch Comp Sci & Artif Intelligence, Wuhan, Peoples R China
[2] Wuhan Univ Technol, Sch Comp Sci & Artif Intelligence, Wuhan, Peoples R China
[3] China Univ Geosci, Wuhan, Peoples R China
Source
2023 IEEE/ACM 23RD INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND INTERNET COMPUTING WORKSHOPS, CCGRIDW | 2023
Funding
National Natural Science Foundation of China;
Keywords
deep learning; data security; privacy protection; attack model;
DOI
10.1109/CCGridW59191.2023.00034
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Since deep learning models usually process large amounts of data, the accompanying risk of privacy leakage has attracted increasing attention. Although various privacy-preserving deep learning (PPDL) methods have been proposed, they may still leak private information in some cases. To better investigate the security of existing PPDL methods, we establish a new attack model that may be exploited by competitors of the owners of private data. Specifically, we assume that the competitor holds data from the same domain as the other party's private data, applies the same PPDL method to this data to obtain perturbed data, and then trains a model to invert the perturbation. Data perturbation-based PPDL methods are selected in four scenarios, and their security against the proposed competitor attack model (CAM) is investigated. Experimental results on three public datasets, i.e., MNIST, CIFAR-10, and LFW, demonstrate that the selected methods tend to be vulnerable to CAM: on average, the recognition accuracy for the images reconstructed by CAM is about 10% lower than that for the original images, the PSNR exceeds 15 dB, and the outlines of the images and other information are visible to the naked eye.
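To make the attack pipeline described above concrete, the following is a minimal sketch, not the authors' implementation: it assumes a pixelization-style perturbation as the PPDL method, a small convolutional inverter, and MNIST as the shared domain (one of the paper's datasets); the architecture, hyperparameters, and perturb() function are illustrative assumptions only.

# Sketch of the competitor attack model (CAM): the competitor perturbs his own
# same-domain data with the same PPDL method as the victim, trains an inversion
# network on (perturbed, original) pairs, then applies it to the victim's
# perturbed images. All design choices here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def perturb(x, block=4):
    # Stand-in PPDL perturbation: pixelization by average pooling then upsampling.
    h, w = x.shape[-2:]
    return F.interpolate(F.avg_pool2d(x, block), size=(h, w), mode="nearest")

class Inverter(nn.Module):
    # Small convolutional network mapping perturbed images to estimates of the originals.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def psnr(x, y, max_val=1.0):
    # Peak signal-to-noise ratio between reconstruction and original, in dB.
    mse = F.mse_loss(x, y)
    return 10 * torch.log10(max_val ** 2 / mse)

if __name__ == "__main__":
    # Competitor's own data from the same domain as the victim's private data.
    data = datasets.MNIST("./data", train=True, download=True,
                          transform=transforms.ToTensor())
    loader = DataLoader(data, batch_size=128, shuffle=True)

    model = Inverter()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(2):               # a few epochs, for illustration only
        for x, _ in loader:
            x_pert = perturb(x)          # apply the same perturbation the victim uses
            loss = F.mse_loss(model(x_pert), x)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # At attack time, the trained inverter is applied to the victim's perturbed images;
    # here it is evaluated on held-out perturbed samples via PSNR.
    x, _ = next(iter(loader))
    recon = model(perturb(x))
    print("PSNR of reconstructions (dB):", psnr(recon, x).item())

The key design point is that the inverter never needs the victim's original images: supervision comes entirely from the competitor's own data, which is plausible whenever both parties operate in the same domain.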
Pages: 133-140
Number of Pages: 8