User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization

Cited by: 149
Authors
Wei, Kang [1 ]
Li, Jun [1 ]
Ding, Ming [2 ]
Ma, Chuan [1 ]
Su, Hang [3 ]
Zhang, Bo [3 ]
Poor, H. Vincent [4 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing 210094, Peoples R China
[2] CSIRO, Data61, Sydney, ACT 2601, Australia
[3] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[4] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
Funding
US National Science Foundation; National Natural Science Foundation of China
Keywords
Federated learning; differential privacy; communication round; mobile edge computing; attacks
DOI
10.1109/TMC.2021.3056991
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Federated learning (FL), as a type of collaborative machine learning framework, is capable of preserving private data from mobile terminals (MTs) while training the data into useful models. Nevertheless, from a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs. To address this problem, we first make use of the concept of local differential privacy (LDP), and propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers. According to our analysis, the UDP framework can realize (ε_i, δ_i)-LDP for the i-th MT with adjustable privacy protection levels by varying the variances of the artificial noise processes. We then derive a theoretical convergence upper-bound for the UDP algorithm. It reveals that there exists an optimal number of communication rounds to achieve the best learning performance. More importantly, we propose a communication rounds discounting (CRD) method. Compared with the heuristic search method, the proposed CRD method can achieve a much better trade-off between the computational complexity of searching and the convergence performance. Extensive experiments indicate that our UDP algorithm using the proposed CRD method can effectively improve both the training efficiency and model quality for the given privacy protection levels.
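The noise-perturbation step summarized in the abstract can be illustrated with a minimal sketch: each MT clips its local model update and adds zero-mean Gaussian noise before uploading it to the server. The clipping threshold clip_norm and the calibration sigma = clip_norm * sqrt(2 * ln(1.25 / delta)) / epsilon below are the standard Gaussian-mechanism choices and are assumptions for illustration, not the exact variance expression derived in the paper.

    import numpy as np

    def perturb_update(local_weights, clip_norm=1.0, epsilon=1.0, delta=1e-5):
        """Clip a local model update and add Gaussian noise before upload.

        Sketch of the user-level perturbation idea; the noise scale follows
        the classical Gaussian-mechanism calibration, which is an assumption
        here rather than the paper's exact result.
        """
        # Bound the L2 sensitivity of the whole update by clip_norm.
        flat = np.concatenate([w.ravel() for w in local_weights])
        norm = np.linalg.norm(flat)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        # Standard Gaussian-mechanism noise standard deviation (assumed form).
        sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        # Add zero-mean Gaussian noise to every clipped parameter tensor.
        return [w * scale + np.random.normal(0.0, sigma, size=w.shape)
                for w in local_weights]

A larger epsilon (weaker privacy) shrinks sigma, so each MT can tune its own (ε_i, δ_i) level simply by choosing the variance of the noise it injects, which is the adjustability the abstract refers to.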
Pages: 3388-3401
Number of pages: 14