Analysing Utility Loss in Federated Learning with Differential Privacy

Cited by: 0
Authors
Pustozerova, Anastasia [1 ]
Baumbach, Jan [2 ]
Mayer, Rudolf [1 ]
Affiliations
[1] SBA Research, Vienna, Austria
[2] University of Hamburg, Institute for Computational Systems Biology, Hamburg, Germany
Source
2023 IEEE 22nd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom/BigDataSE/CSE/EUC/iSCI) | 2024
Keywords
Federated Learning; Differential Privacy; Output Perturbation; DP-SGD
DOI
10.1109/TrustCom60117.2023.00167
Chinese Library Classification (CLC) Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Federated learning provides a solution when multiple parties want to collaboratively train a machine learning model without directly sharing sensitive data. In federated learning, each party trains a machine learning model locally on its private data and sends only the model's weights or updates (gradients) to an aggregator, which averages the locally trained models into a new global model of higher effectiveness. However, the machine learning models that have to be shared during the federated learning process can still leak sensitive information about their training data, e.g. through membership inference attacks. Differential Privacy (DP) can mitigate privacy risks in federated learning by introducing noise into the machine learning models. In this work, we consider two approaches for achieving Differential Privacy in federated learning: (i) output perturbation of the trained machine learning models and (ii) a differentially private form of stochastic gradient descent (DP-SGD). We perform an extensive analysis of these two approaches in several federated settings and compare their performance in terms of model utility and achieved privacy. We observe that DP-SGD allows for a better trade-off between privacy and utility.
Pages: 1230-1235
Number of pages: 6
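
The two mechanisms named in the abstract can be illustrated with a short sketch. The following Python/NumPy code is a minimal illustration under stated assumptions, not the authors' implementation: the logistic-regression task, the noise scale sigma, the clipping norm clip_norm, the learning rate and all function names are chosen purely for the example. It contrasts, for a single federated party, (i) output perturbation, where Gaussian noise is added once to the finished local model before it is shared, and (ii) DP-SGD, where each per-example gradient is clipped and Gaussian noise is added at every update step.

import numpy as np

rng = np.random.default_rng(0)

def grad(w, xi, yi):
    # Per-example gradient of the logistic-regression loss (illustrative model).
    pred = 1.0 / (1.0 + np.exp(-xi @ w))
    return (pred - yi) * xi

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # Plain (non-private) local training, as in vanilla federated learning.
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = w - lr * grad(w, xi, yi)
    return w

def output_perturbation(w, sigma):
    # Approach (i): add Gaussian noise once to the finished local model
    # before it is sent to the aggregator.
    return w + rng.normal(0.0, sigma, size=w.shape)

def local_dp_sgd(w, X, y, lr=0.1, epochs=5, clip_norm=1.0, sigma=1.0, batch=16):
    # Approach (ii), DP-SGD style: clip every per-example gradient to clip_norm
    # and add Gaussian noise calibrated to clip_norm at each update step.
    for _ in range(epochs):
        for start in range(0, len(X), batch):
            xb, yb = X[start:start + batch], y[start:start + batch]
            gs = [grad(w, xi, yi) for xi, yi in zip(xb, yb)]
            gs = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12)) for g in gs]
            noise = rng.normal(0.0, sigma * clip_norm, size=w.shape)
            w = w - lr * (np.sum(gs, axis=0) + noise) / len(gs)
    return w

# Toy private data held by one federated party (assumed for illustration only).
X = rng.normal(size=(128, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=128) > 0).astype(float)
w0 = np.zeros(5)

w_shared_op = output_perturbation(local_sgd(w0, X, y), sigma=0.5)  # approach (i)
w_shared_dp = local_dp_sgd(w0, X, y)                               # approach (ii)
print("output perturbation:", w_shared_op)
print("DP-SGD:             ", w_shared_dp)

In the federated process described in the abstract, w_shared_op or w_shared_dp would be the (noisy) model a party sends to the aggregator; the sketch only shows where the noise enters in each approach and does not perform the privacy accounting needed to state a concrete epsilon.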