User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization

Cited by: 168
Authors
Wei, Kang [1 ]
Li, Jun [1 ]
Ding, Ming [2 ]
Ma, Chuan [1 ]
Su, Hang [3 ]
Zhang, Bo [3 ]
Poor, H. Vincent [4 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing 210094, Peoples R China
[2] CSIRO, Data61, Sydney, ACT 2601, Australia
[3] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[4] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
Funding
US National Science Foundation; National Natural Science Foundation of China
Keywords
Federated learning; differential privacy; communication round; mobile edge computing; ATTACKS;
DOI
10.1109/TMC.2021.3056991
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Federated learning (FL), as a type of collaborative machine learning framework, is capable of preserving private data from mobile terminals (MTs) while training the data into useful models. Nevertheless, from an information-theoretic point of view, it is still possible for a curious server to infer private information from the shared models uploaded by MTs. To address this problem, we first make use of the concept of local differential privacy (LDP), and propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers. According to our analysis, the UDP framework can realize (ε_i, δ_i)-LDP for the i-th MT with adjustable privacy protection levels by varying the variances of the artificial noise processes. We then derive a theoretical convergence upper bound for the UDP algorithm. It reveals that there exists an optimal number of communication rounds to achieve the best learning performance. More importantly, we propose a communication rounds discounting (CRD) method. Compared with the heuristic search method, the proposed CRD method can achieve a much better trade-off between the computational complexity of searching and the convergence performance. Extensive experiments indicate that our UDP algorithm using the proposed CRD method can effectively improve both the training efficiency and model quality for the given privacy protection levels.
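As a rough illustration of the perturbation step described in the abstract, the sketch below shows a client clipping its local model update and adding Gaussian noise before uploading it to the server. The function names (clip_update, udp_perturb) and the standard Gaussian-mechanism calibration of the noise scale are assumptions for illustration only; the paper's own calibration additionally accounts for the number of communication rounds in which a client's update is exposed.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update to bound its L2 sensitivity."""
    norm = np.linalg.norm(update)
    if norm == 0.0:
        return update
    return update * min(1.0, clip_norm / norm)

def udp_perturb(update, clip_norm, epsilon, delta):
    """Add Gaussian noise to a clipped update (user-level DP, single exposure).

    Assumes the standard Gaussian-mechanism calibration
    sigma = clip_norm * sqrt(2 * ln(1.25 / delta)) / epsilon,
    not the paper's exact per-round formula.
    """
    clipped = clip_update(update, clip_norm)
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = np.random.normal(0.0, sigma, size=clipped.shape)
    return clipped + noise

# Example: each MT perturbs its local model delta before uploading.
local_update = np.random.randn(1000) * 0.01   # stand-in for a flattened model delta
private_update = udp_perturb(local_update, clip_norm=1.0, epsilon=1.0, delta=1e-5)
```

Varying epsilon and delta per client changes the noise variance, which is how adjustable per-MT privacy protection levels are realized in this kind of scheme.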
Pages: 3388-3401
Number of pages: 14