GFL-ALDPA: a gradient compression federated learning framework based on adaptive local differential privacy budget allocation

Cited: 6
Authors
Yang, Jiawei [1 ]
Chen, Shuhong [1 ]
Wang, Guojun [1 ]
Wang, Zijia [1 ]
Jie, Zhiyong [1 ]
Arif, Muhammad [2 ]
Affiliations
[1] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Guangdong, Peoples R China
[2] Super Univ, Dept Comp Sci, Lahore 54000, Pakistan
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Differential privacy; Privacy-preserving; Gradient compression; Privacy budget allocation;
DOI
10.1007/s11042-023-16543-y
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) is a popular distributed machine learning framework that can protect users' private data from being exposed to adversaries. However, related work shows that sensitive private information can still be compromised by analyzing the parameters uploaded by clients. Applying differential privacy to federated learning has become a popular way to achieve strict privacy guarantees in recent years. To reduce the impact of noise, this paper proposes to apply local differential privacy (LDP) to federated learning. We propose a gradient compression federated learning framework based on adaptive local differential privacy budget allocation (GFL-ALDPA). We propose a novel adaptive privacy budget allocation scheme based on communication rounds to reduce the loss of privacy budget and the amount of model noise. It maximizes the limited privacy budget and improves model accuracy by assigning different privacy budgets to different communication rounds during training. Furthermore, we also propose a gradient compression mechanism based on dimension reduction, which simultaneously reduces the communication cost, the overall noise magnitude, and the loss of the total privacy budget, ensuring accuracy under a specific privacy-preserving guarantee. Finally, this paper presents an experimental evaluation on the MNIST dataset. Theoretical analysis and experiments demonstrate that our framework achieves a better trade-off between privacy preservation, communication efficiency, and model accuracy.
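The record does not include the paper's concrete algorithms, so the following is only a minimal illustrative sketch of the two ideas the abstract describes: assigning a different share of the LDP budget to each communication round, and compressing gradients by dimension reduction before perturbation. The allocation rule (geometric growth across rounds), the random-projection compressor, and all names (allocate_budget, compress_gradient, perturb) are assumptions made for illustration, not the authors' actual method.

```python
import numpy as np

def allocate_budget(total_epsilon, num_rounds, growth=1.1):
    """Split a total LDP budget across rounds; later rounds get a larger share
    (less noise). This geometric rule is a hypothetical choice for illustration."""
    weights = np.array([growth ** t for t in range(num_rounds)])
    return total_epsilon * weights / weights.sum()

def compress_gradient(grad, proj):
    """Dimension reduction via a shared random projection: fewer coordinates
    to perturb and fewer values to upload per round."""
    return proj @ grad

def perturb(vec, epsilon, sensitivity):
    """Laplace mechanism providing epsilon-LDP for the compressed gradient."""
    scale = sensitivity / epsilon
    return vec + np.random.laplace(0.0, scale, size=vec.shape)

# One client over T communication rounds (synthetic gradients stand in for
# the real local training step, which this record does not describe).
rng = np.random.default_rng(0)
d, k, T = 10_000, 100, 50                         # full dim, compressed dim, rounds
proj = rng.normal(0.0, 1.0 / np.sqrt(k), (k, d))  # projection shared with the server
eps_per_round = allocate_budget(total_epsilon=8.0, num_rounds=T)

for t in range(T):
    grad = rng.normal(size=d)                              # placeholder local gradient
    comp = compress_gradient(grad, proj)
    comp = comp / max(1.0, np.linalg.norm(comp, ord=1))    # clip L1 norm to 1
    # any two clipped vectors differ by at most 2 in L1 norm, hence sensitivity 2
    update = perturb(comp, eps_per_round[t], sensitivity=2.0)
    # `update` (k noisy values) is what the client would upload this round
```

Under this kind of split, early rounds receive a smaller epsilon (more noise) and later rounds a larger one, while projecting a d-dimensional gradient down to k dimensions means noise is injected into k coordinates instead of d and only k values are transmitted per round, matching the trade-offs the abstract claims.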
Pages: 26349-26368
Page count: 20