PFLF: Privacy-Preserving Federated Learning Framework for Edge Computing

Cited by: 53
Authors
Zhou, Hao [1 ,2 ]
Yang, Geng [1 ,2 ]
Dai, Hua [1 ,2 ]
Liu, Guoxiu [3 ,4 ]
Affiliations
[1] Nanjing Univ Post & Telecommun, Sch Comp Sci, Nanjing 210023, Peoples R China
[2] Jiangsu Secur & Intelligent Proc Lab Big Data, Nanjing 210023, Peoples R China
[3] Jinling Inst Technol, Sch Network & Commun Engn, Wuhu 240002, Anhui, Peoples R China
[4] Anhui Prov Key Lab Network & Informat Secur, Wuhu 240002, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Privacy; Servers; Convergence; Training; Collaborative work; Edge computing; Computational modeling; Federated learning; differential privacy; convergence performance; information leakage; edge computing; COMMUNICATION; CHALLENGES;
DOI
10.1109/TIFS.2022.3174394
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Code
081202 ;
Abstract
Federated learning (FL) can protect clients' privacy from leakage in distributed machine learning, and applying it to edge computing protects the privacy of edge clients while benefiting edge applications. Nevertheless, eavesdroppers can still analyze the exchanged parameter information to infer clients' private information and model features. Moreover, it is difficult for an FL framework to achieve a high privacy level, good convergence, and low communication overhead throughout the entire training process. In this paper, we propose a novel privacy-preserving federated learning framework for edge computing (PFLF). In PFLF, each client and the application server add noise to the data before sending it. To protect clients' privacy, we design a flexible arrangement mechanism that determines the optimal number of training rounds for each client. We prove that PFLF guarantees the privacy of clients and servers during the entire training process. We then theoretically establish three main properties of PFLF: 1) for a given privacy level and number of model aggregations, there is an optimal number of participation rounds for each client; 2) the convergence has both an upper and a lower bound; 3) PFLF achieves low communication overhead through its flexible participation mechanism. Simulation experiments confirm the correctness of our theoretical analysis. PFLF therefore provides a framework for balancing privacy level and convergence while keeping communication overhead low, even when some clients drop out of training.
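The abstract's core privacy step, each party adding noise to its update before transmission, follows the standard differentially private FL pattern of clipping an update and perturbing it with Gaussian noise. The sketch below illustrates that generic pattern only; the function name, `clip_norm`, and `sigma` are illustrative assumptions, and the paper's exact mechanism and parameter choices are not specified in this record.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, sigma=1.0, rng=None):
    """Clip an update to L2 norm `clip_norm`, then add Gaussian noise
    scaled to the clipping bound, before the client uploads it.

    Generic DP-FL sketch; not the paper's exact mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(update, dtype=float)
    # Bound each update's sensitivity by scaling it into the clip ball.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian-mechanism noise, calibrated to the clipping bound.
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise
```

The noise scale `sigma` trades privacy against convergence: larger `sigma` gives a stronger privacy guarantee per round but slows model convergence, which is exactly the balance the framework's theoretical bounds characterize.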
Pages: 1905-1918
Page count: 14