On Safeguarding Privacy and Security in the Framework of Federated Learning

Cited by: 172
Authors
Ma, Chuan [1 ]
Li, Jun [1 ,2 ]
Ding, Ming [3 ]
Yang, Howard H. [4 ]
Shu, Feng [1 ]
Quek, Tony Q. S. [4 ]
Poor, H. Vincent [5 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing, Peoples R China
[2] Natl Res Tomsk Polytech Univ, Dept Software Engn, Inst Cybernet, Tomsk, Russia
[3] CSIRO, Data61, Canberra, ACT, Australia
[4] Singapore Univ Technol & Design, Informat Syst Technol & Design Pillar, Singapore, Singapore
[5] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
Source
IEEE NETWORK | 2020, Vol. 34, Issue 4
Keywords
Servers; Data privacy; Security; Privacy; Data models; Training; Convergence
DOI
10.1109/MNET.001.1900506
CLC Number (Chinese Library Classification)
TP3 [Computing Technology; Computer Technology];
Discipline Code
0812;
Abstract
Motivated by the advancing computational capacity of wireless end-user equipment (UE), as well as increasing concerns about sharing private data, a new machine learning (ML) paradigm has emerged, namely federated learning (FL). Specifically, FL decouples data provision at the UEs from ML model aggregation at a central unit. By training models locally and sharing only model updates, FL avoids direct data leakage from the UEs, thereby preserving privacy and security to some extent. However, even if raw data never leave the UEs, an individual's private information can still be extracted by recently discovered attacks against the FL architecture. In this work, we analyze the privacy and security issues in FL and discuss several challenges to preserving privacy and security when designing FL systems. In addition, we provide extensive simulation results to showcase the discussed issues and possible solutions.
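The abstract's core mechanism, local training at the UEs followed by aggregation of model parameters at a central server, can be illustrated with a minimal federated-averaging sketch. This is a generic illustration under simple assumptions (a linear least-squares model, synthetic client data, plain gradient descent), not the authors' implementation or simulation code.

# Minimal federated-averaging sketch (illustrative only, not the paper's code).
# Each UE fits a linear model on its own data; only weight vectors are sent
# to the server, which averages them -- raw data never leave the clients.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few steps of gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """Server collects locally trained weights and averages them."""
    local_ws = [local_update(global_w, X, y) for X, y in client_data]
    return np.mean(local_ws, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0])
    # Three clients, each holding a private local dataset.
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + 0.1 * rng.normal(size=50)
        clients.append((X, y))
    w = np.zeros(2)
    for _ in range(20):
        w = federated_round(w, clients)
    print("estimated weights:", w)   # approaches [1.0, -2.0]

Only the weight vectors cross the client-server boundary in this sketch; as the paper emphasizes, even such model updates can leak private information to inference attacks, which motivates defenses such as differential privacy.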
Pages: 242-248
Number of pages: 7