EPPS: Efficient Privacy-Preserving Scheme in Distributed Deep Learning

Cited by: 2
Authors
Li, Yiran [1 ,2 ]
Li, Hongwei [1 ,2 ]
Xu, Guowen [1 ]
Liu, Sen [1 ]
Lu, Rongxing [3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu, Peoples R China
[2] Peng Cheng Lab, Cyberspace Secur Res Ctr, Shenzhen, Peoples R China
[3] Univ New Brunswick, Fac Comp Sci, Fredericton, NB E3B 5A3, Canada
Source
2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM) | 2019
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Privacy-Preserving; Distributed Deep Learning; Multiple Keys;
DOI
10.1109/globecom38437.2019.9013395
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
As a promising neural-network training paradigm, distributed deep learning has been widely applied in various scenarios, where clients and the cloud server cooperate only by sharing local gradients and global parameters. However, research has shown that an adversary can still reconstruct users' private information even when little information is leaked. To address this problem, several privacy-preserving approaches to distributed training have been developed using mature technologies such as Differential Privacy, Secure Multi-party Computation, and Homomorphic Encryption. However, state-of-the-art results remain deficient in security, functionality, and efficiency. In this paper, we propose an Efficient Privacy-Preserving Scheme (EPPS) for distributed deep learning. We claim that our solution achieves the best trade-off among security, efficiency, and functionality. Specifically, we adopt threshold Paillier encryption as the underlying structure of our secure training model. Hence, the confidentiality of honest users' local gradients is guaranteed even if the cloud server colludes with multiple users. In addition, since users may go offline unexpectedly due to network conditions or equipment failure, EPPS also supports users exiting at any phase of the entire workflow. Furthermore, we conducted extensive experiments on real-world data to demonstrate the preferable performance of the proposed scheme.
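The property the abstract relies on is the additive homomorphism of Paillier encryption: multiplying ciphertexts adds the underlying plaintexts, so a server can aggregate encrypted gradients without ever decrypting an individual one. The sketch below illustrates this with textbook (non-threshold) Paillier over toy primes; the prime sizes, the integer encoding of gradients, and all names are illustrative assumptions, not the paper's actual construction or parameters.

```python
# Hedged sketch: textbook Paillier additive homomorphism over toy primes.
# EPPS itself uses a *threshold* variant and cryptographic-size moduli;
# this only demonstrates why ciphertext products aggregate gradients.
import random
from math import gcd

p, q = 1_000_003, 1_000_033    # toy primes; real deployments use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1                      # standard generator choice g = n + 1
lam = (p - 1) * (q - 1)        # phi(n) suffices as lambda when g = n + 1
mu = pow(lam, -1, n)           # decryption constant: inverse of lambda mod n

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 with random r coprime to n."""
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Each client encrypts its integer-encoded local gradient; the server
# multiplies the ciphertexts, which adds the plaintext gradients, and
# never observes any individual gradient.
grads = [7, 11, 23]            # hypothetical quantized gradients from three clients
agg = 1
for ct in (encrypt(gi) for gi in grads):
    agg = (agg * ct) % n2
assert decrypt(agg) == sum(grads)
```

In the threshold setting the decryption key is additionally secret-shared, so no single party (including the server) can run `decrypt` alone, which is what protects honest users against server-user collusion.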
Pages: 6