PFLF: Privacy-Preserving Federated Learning Framework for Edge Computing

Cited by: 52
Authors
Zhou, Hao [1 ,2 ]
Yang, Geng [1 ,2 ]
Dai, Hua [1 ,2 ]
Liu, Guoxiu [3 ,4 ]
Affiliations
[1] Nanjing Univ Post & Telecommun, Sch Comp Sci, Nanjing 210023, Peoples R China
[2] Jiangsu Secur & Intelligent Proc Lab Big Data, Nanjing 210023, Peoples R China
[3] Jinling Inst Technol, Sch Network & Commun Engn, Wuhu 240002, Anhui, Peoples R China
[4] Anhui Prov Key Lab Network & Informat Secur, Wuhu 240002, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Privacy; Servers; Convergence; Training; Collaborative work; Edge computing; Computational modeling; Federated learning; Differential privacy; Convergence performance; Information leakage; Communication; Challenges;
DOI
10.1109/TIFS.2022.3174394
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
Federated learning (FL) protects clients' privacy in distributed machine learning. Applying FL to edge computing can therefore protect the privacy of edge clients. Nevertheless, eavesdroppers can still analyze the exchanged parameters to infer clients' private information and model features, and it is difficult to simultaneously achieve a high privacy level, good convergence, and low communication overhead throughout the FL process. In this paper, we propose a novel privacy-preserving federated learning framework for edge computing (PFLF). In PFLF, each client and the application server add noise to the data before sending it. To protect clients' privacy, we design a flexible arrangement mechanism that determines the optimal number of training rounds for each client. We prove that PFLF guarantees the privacy of clients and servers during the entire training process. We then theoretically establish three main properties of PFLF: 1) for a given privacy level and number of model aggregations, there is an optimal number of participation rounds for each client; 2) the convergence has both an upper and a lower bound; 3) PFLF achieves low communication overhead through its flexible participation training mechanism. Simulation experiments confirm the correctness of our theoretical analysis. PFLF thus provides a framework that balances the privacy level against convergence while keeping communication overhead low, even when some clients drop out of training.
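The noise-addition step described in the abstract follows the standard differential-privacy pattern for federated learning: each client clips its local update to a bounded L2 norm and then adds Gaussian noise calibrated to that bound before transmission. Below is a minimal illustrative sketch of that pattern, not the paper's exact PFLF algorithm; the function name, clipping norm, and noise multiplier are assumptions chosen for the example.

```python
import math
import random

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the update vector to L2 norm `clip_norm`, then add Gaussian
    noise scaled to the clipping bound (the standard Gaussian mechanism)."""
    rng = rng or random.Random()
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / max(norm, 1e-12))  # shrink only if too large
    sigma = noise_multiplier * clip_norm            # noise std tied to sensitivity
    return [x * scale + rng.gauss(0.0, sigma) for x in update]

# Each client perturbs its local gradient before sending it to the server;
# the server aggregates the already-noisy updates.
noisy_grad = privatize_update([3.0, 4.0], clip_norm=1.0)
```

Clipping bounds the sensitivity of any single client's contribution, which is what lets the added noise translate into a formal differential-privacy guarantee; the trade-off between `noise_multiplier` and model convergence is exactly the balance the paper analyzes.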
Pages: 1905-1918
Page count: 14