Privacy-Preserving Deep Learning via Weight Transmission

Cited by: 78
Authors
Le Trieu Phong [1]
Tran Thi Phuong [1,2,3]
Affiliations
[1] National Institute of Information and Communications Technology (NICT), Koganei, Tokyo 184-8795, Japan
[2] Ton Duc Thang University, Faculty of Mathematics and Statistics, Ho Chi Minh City, Vietnam
[3] Meiji University, Designated Research Projects Unit, Kawasaki, Kanagawa 214-8571, Japan
Keywords
Privacy preservation; stochastic gradient descent; distributed trainers; neural networks
DOI
10.1109/TIFS.2019.2911169
CLC Classification
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
This paper considers the scenario in which multiple data owners wish to apply a machine learning method over the combined dataset of all owners to obtain the best possible learning output, but do not want to share their local datasets owing to privacy concerns. We design systems for the case in which the stochastic gradient descent (SGD) algorithm is used as the machine learning method, because SGD (or its variants) is at the heart of recent deep learning techniques over neural networks. Our systems differ from existing systems in the following features: 1) any activation function can be used, meaning that no privacy-preserving-friendly approximation is required; 2) gradients computed by SGD are not shared; the weight parameters are shared instead; and 3) the systems are robust against colluding parties, even in the extreme case in which only one honest party exists. One of our systems requires a shared symmetric key among the data owners (trainers) to ensure the secrecy of the weight parameters against a central server. We prove that our systems, while privacy preserving, achieve the same learning accuracy as SGD and hence retain the merit of deep learning with respect to accuracy. Finally, we conduct several experiments using benchmark datasets and show that our systems outperform the previous system in terms of learning accuracy.
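The abstract's central idea — trainers exchange weight parameters rather than gradients — can be illustrated with a minimal sketch. This is not the paper's protocol: the symmetric-key encryption layer that hides weights from the server is omitted, the toy task is linear regression rather than a neural network, and the names `local_sgd` and `server_weights` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic private datasets for 3 trainers: y = X @ w_true + noise.
w_true = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    datasets.append((X, y))

def local_sgd(w, X, y, lr=0.05, epochs=5):
    """Plain SGD on a trainer's private data; gradients never leave here."""
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # per-example gradient, kept local
            w = w - lr * grad
    return w

# The server only ever stores the latest weight vector.
server_weights = np.zeros(2)
for _round in range(5):
    for X, y in datasets:              # trainers take turns
        w = server_weights.copy()      # download current weights
        w = local_sgd(w, X, y)         # train locally on private data
        server_weights = w             # upload weights, not gradients
```

Because each trainer resumes SGD from the weights the previous trainer uploaded, the final weights are the same as those a single SGD run over the combined data would produce, which is the accuracy-preservation property the abstract claims.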
Pages: 3003-3015 (13 pages)