Private and Secure Distributed Deep Learning: A Survey

Cited by: 0
Authors
Allaart, Corinne [1 ,2 ]
Amiri, Saba [3 ]
Bal, Henri [1 ]
Belloum, Adam [4 ]
Gommans, Leon [5 ]
van Halteren, Aart [1 ,6 ]
Klous, Sander [3 ]
Affiliations
[1] Vrije Univ Amsterdam, Amsterdam, Netherlands
[2] St Antonius Hosp, Nieuwegein, Netherlands
[3] Univ Amsterdam, Amsterdam, Netherlands
[4] Univ Amsterdam, FNWI, Amsterdam, Netherlands
[5] Koninklijke Luchtvaart Maatschappij, Amsterdam, Netherlands
[6] Philips Res, Eindhoven, North Brabant, Netherlands
Keywords
Deep learning; privacy; security; distributed learning; encryption
DOI
10.1145/3703452
Chinese Library Classification (CLC)
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Traditionally, deep learning practitioners would bring data into a central repository for model training and inference. Recent developments in distributed learning, such as federated learning and deep learning as a service (DLaaS), do not require centralized data and instead push computation to where the distributed datasets reside. These decentralized training schemes, however, introduce additional security and privacy challenges. This survey first structures the field of distributed learning into two main paradigms and then provides an overview of recently published protective measures for each. This work highlights both secure training methods and private inference measures. Our analyses show that recent publications, while highly dependent on the problem definition, report progress in terms of security, privacy, and efficiency. Nevertheless, we also identify several open issues within the private and secure distributed deep learning (PSDDL) field that require more research. We discuss these issues and provide a general overview of how they might be resolved.
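To make the federated-learning paradigm from the abstract concrete (clients train locally and only model updates are aggregated, so raw data never leaves its owner), here is a minimal, hypothetical FedAvg-style sketch. A linear model stands in for a deep network, and all names (`local_update`, `fed_avg`) are illustrative rather than taken from the surveyed papers:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training via gradient descent; (X, y) stays on the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(global_w, clients):
    """Server step: average the returned client models, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients])
    updates = np.stack([local_update(global_w, X, y) for X, y in clients])
    return (sizes[:, None] * updates).sum(axis=0) / sizes.sum()

# Three clients holding disjoint local datasets drawn from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):        # 20 communication rounds
    w = fed_avg(w, clients)
# w converges toward true_w without the server ever seeing raw data
```

Note that, as the abstract points out, this basic scheme alone is not private: the exchanged model updates can still leak information about local data, which is what the protective measures surveyed (differential privacy, homomorphic encryption, secure multi-party computation) address.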
Pages: 43