PPeFL: Privacy-Preserving Edge Federated Learning With Local Differential Privacy

Cited by: 39
Authors
Wang, Baocang [1 ]
Chen, Yange [2 ]
Jiang, Hang [3 ]
Zhao, Zhen [4 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks & Cryptog, Xian 710071, Peoples R China
[2] Xuchang Univ, Sch Informat Engn, Xuchang 461000, Peoples R China
[3] ZTE Corp, Network Intelligence Ctr, Xian 710071, Peoples R China
[4] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
Keywords
Privacy; Perturbation methods; Costs; Federated learning; Training; Servers; Cloud computing; Edge computing; federated learning (FL); local differential privacy (LDP); privacy protection
DOI
10.1109/JIOT.2023.3264259
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Since traditional federated learning (FL) algorithms cannot provide sufficient privacy guarantees, a growing number of approaches apply local differential privacy (LDP) techniques to FL to obtain rigorous privacy guarantees. However, the privacy budget grows roughly in proportion to the dimension of the model parameters, and the large variance introduced by the perturbation mechanisms degrades the performance of the final model. In this article, we propose a novel privacy-preserving edge FL framework based on LDP (PPeFL). Specifically, we present three LDP mechanisms to address the privacy problems in the FL process. The proposed filtering and screening with exponential mechanism (FS-EM) selects the better parameters for global aggregation based on each weight parameter's contribution to the neural network. This not only solves the problem of the privacy budget growing rapidly when a perturbation mechanism is applied locally but also greatly reduces communication costs. In addition, the proposed data perturbation mechanism with stronger privacy (DPM-SP) applies a secondary scrambling to participants' original data and thereby provides strong security. Further, a data perturbation mechanism with enhanced utility (DPM-EU) is proposed to reduce the variance introduced by perturbation. Finally, extensive experiments illustrate that the PPeFL scheme is practical and efficient, providing stronger privacy protection while preserving utility.
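The abstract's FS-EM component relies on the standard exponential mechanism: candidates are chosen with probability proportional to exp(ε·u/(2Δu)), where u is a utility score. The sketch below is not the authors' FS-EM algorithm, only a minimal generic illustration of exponential-mechanism selection; the magnitude-based utility score and the `weights` array are hypothetical stand-ins for "a parameter's contribution to the neural network".

```python
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=None):
    """Pick an index with probability proportional to
    exp(epsilon * score / (2 * sensitivity)) -- the classic
    exponential mechanism for differentially private selection."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    # Subtract the max logit before exponentiating for numerical stability.
    logits = epsilon * scores / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Hypothetical usage: score each weight by its magnitude (a stand-in for
# "contribution to the network") and privately select one to report.
weights = np.array([0.05, -0.9, 0.3, 0.02])
idx = exponential_mechanism(np.abs(weights), epsilon=1.0)
```

Larger ε concentrates the selection on the highest-utility parameter; ε → 0 degrades to uniform random choice, which is the usual privacy/utility trade-off the abstract alludes to.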
Pages: 15488-15500 (13 pages)