Protecting Data Privacy in Federated Learning Combining Differential Privacy and Weak Encryption

Cited by: 4
Authors
Wang, Chuanyin [1 ,2 ]
Ma, Cunqing [1 ]
Li, Min [1 ,2 ]
Gao, Neng [1 ]
Zhang, Yifei [1 ]
Shen, Zhuoxiang [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, State Key Lab Informat Secur, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
Source
SCIENCE OF CYBER SECURITY, SCISEC 2021 | 2021, Vol. 13005
Keywords
Federated learning; Privacy; Differential privacy; Weak encryption;
DOI
10.1007/978-3-030-89137-4_7
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
As a typical application of decentralization, federated learning prevents the leakage of private crowdsourced data across various training tasks. Instead of transmitting raw data, federated learning updates the server's model by aggregating sub-models learned on the clients. However, the transmitted parameters may be intercepted and used by attackers to reconstruct client data, and existing techniques for protecting these parameters do not sufficiently conceal the information they carry. In this paper, we propose a novel and efficient privacy protection method that perturbs the private information contained in the parameters and keeps them in ciphertext form during transmission. For the perturbation step, differential privacy is used to perturb the real parameters, minimizing the private information they contain. To further camouflage the parameters, weak encryption keeps them in ciphertext form as they travel from client to server. As a result, neither the server nor any man-in-the-middle attacker can directly obtain the real parameter values. Experiments show that our method effectively resists attacks from both malicious clients and a malicious server.
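The abstract's client-side pipeline can be sketched in two stages: differentially private perturbation of the parameters, followed by a lightweight masking step so that only ciphertext travels to the server. The sketch below is illustrative, not the authors' actual construction: the Laplace mechanism stands in for their differential-privacy step, and a seeded pseudorandom additive mask stands in for their weak encryption; all function names and parameters are hypothetical.

```python
import numpy as np

def perturb_parameters(params, epsilon=1.0, sensitivity=1.0, rng=None):
    # Standard Laplace mechanism: noise scale = sensitivity / epsilon.
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=params.shape)
    return params + noise

def weak_encrypt(params, key_seed):
    # Additive mask from a keyed pseudorandom stream (a stand-in for
    # the paper's weak encryption): the wire carries only ciphertext.
    stream = np.random.default_rng(key_seed).standard_normal(params.shape)
    return params + stream

def weak_decrypt(cipher, key_seed):
    # Holder of the shared seed removes the mask, recovering only the
    # already-perturbed (differentially private) parameters.
    stream = np.random.default_rng(key_seed).standard_normal(cipher.shape)
    return cipher - stream

# Client side: perturb, then encrypt before upload.
params = np.array([0.5, -1.2, 3.3])
noisy = perturb_parameters(params, epsilon=2.0)
cipher = weak_encrypt(noisy, key_seed=42)

# Server side: decryption yields the noisy parameters, never the real ones.
recovered = weak_decrypt(cipher, key_seed=42)
assert np.allclose(recovered, noisy)
```

Note the layering: even the party that can decrypt sees only noise-perturbed parameters, while an eavesdropper without the key sees only the masked ciphertext.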
Pages: 95-109 (15 pages)