Privacy-Preserving Federated Deep Learning With Irregular Users

Times Cited: 105
Authors
Xu, Guowen [1 ,2 ]
Li, Hongwei [1 ,2 ]
Zhang, Yun [1 ]
Xu, Shengmin [3 ]
Ning, Jianting [3 ,4 ]
Deng, Robert H. [3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[2] Cyberspace Secur Res Ctr, Peng Cheng Lab, Shenzhen 518000, Peoples R China
[3] Singapore Management Univ, Sch Informat Syst, Singapore 178902, Singapore
[4] Fujian Normal Univ, Coll Math & Informat, Fuzhou 350007, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Training; Servers; Deep learning; Privacy; Cryptography; Neural networks; Privacy protection; federated learning; cloud computing; ACCESS-CONTROL; EFFICIENT; SECURITY; AWARE;
DOI
10.1109/TDSC.2020.3005909
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Federated deep learning has been widely applied in various fields. To protect data privacy, many privacy-preserving approaches have been designed and implemented for various scenarios. However, existing works rarely consider a fundamental issue: the data shared by certain users (called irregular users) may be of low quality. Clearly, in a federated training process, data shared by many irregular users may impair the training accuracy or, worse, render the final model useless. In this article, we propose PPFDL, a Privacy-Preserving Federated Deep Learning framework with irregular users. Specifically, we design a novel solution to reduce the negative impact of irregular users on training accuracy, guaranteeing that the training results are computed mainly from the contributions of high-quality data. Meanwhile, we exploit Yao's garbled circuits and additively homomorphic cryptosystems to ensure the confidentiality of all user-related information. Moreover, PPFDL is robust to users dropping out during the entire execution: any user can go offline at any subprocess of training, as long as the remaining online users can still complete the training task. Extensive experiments demonstrate the superior performance of PPFDL in terms of training accuracy, computation, and communication overheads.
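The abstract's core cryptographic building block, an additively homomorphic cryptosystem, lets a server sum users' encrypted model updates without seeing any individual update. The sketch below is a minimal toy illustration using a textbook Paillier scheme with deliberately tiny demo primes; it is not the paper's actual construction, and the parameter choices (`p`, `q`, the quantized `updates` values) are illustrative assumptions only.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (illustrative only; real deployments need
# primes of >= 1024 bits). Encryption is additively homomorphic: the
# product of ciphertexts decrypts to the sum of plaintexts, so a server
# can aggregate encrypted gradients without learning any single update.
p, q = 1789, 1867                             # small demo primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                                     # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                          # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu modulo n
    return (pow(c, lam, n2) - 1) // n * mu % n

# Three users submit quantized gradient components; the server multiplies
# the ciphertexts, which corresponds to adding the plaintexts.
updates = [13, 7, 22]                         # hypothetical user updates
aggregate_ct = 1
for m in updates:
    aggregate_ct = aggregate_ct * encrypt(m) % n2

print(decrypt(aggregate_ct))                  # 42 = 13 + 7 + 22
```

In a full secure-aggregation protocol such as the one the paper describes, this primitive would be combined with garbled circuits for the non-linear quality-weighting step and with recovery machinery for users who drop out mid-round.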
Pages: 1364-1381
Number of Pages: 18