Binary Federated Learning with Client-Level Differential Privacy

Cited by: 1
Authors
Liu, Lumin [1 ]
Zhang, Jun [1 ]
Song, Shenghui [1 ]
Letaief, Khaled B. [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept ECE, Hong Kong, Peoples R China
Source
IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM | 2023
Keywords
DOI
10.1109/GLOBECOM54140.2023.10437593
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Code
0808; 0809;
Abstract
Federated learning (FL) is a privacy-preserving collaborative learning framework, and differential privacy can be applied to further enhance its privacy protection. Existing FL systems typically adopt Federated Averaging (FedAvg) as the training algorithm and implement differential privacy with a Gaussian mechanism. However, the inherent privacy-utility trade-off in these systems severely degrades training performance when a tight privacy budget is enforced. Moreover, the Gaussian mechanism requires model weights to be represented in high precision. To improve communication efficiency and achieve a better privacy-utility trade-off, we propose a communication-efficient FL training algorithm with a differential privacy guarantee. Specifically, we adopt binary neural networks (BNNs) and introduce discrete noise in the FL setting. Binary model parameters are uploaded for higher communication efficiency, and discrete noise is added to achieve client-level differential privacy protection. The achieved performance guarantee is rigorously proved and shown to depend on the level of discrete noise. Experimental results on the MNIST and Fashion-MNIST datasets demonstrate that the proposed training algorithm achieves client-level privacy protection with a performance gain while enjoying the low communication overhead of binary model updates.
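The abstract outlines the pipeline at a high level: clients binarize their model parameters, perturb the binary values with discrete noise, and the server aggregates the noisy binary updates in a FedAvg-style round. The following is a minimal illustrative sketch of that flow, not the paper's actual algorithm: the sign-based binarization, the bit-flip noise mechanism, and the names `binarize`, `privatize`, `server_aggregate`, and `flip_prob` are all assumptions introduced here for illustration; the paper's discrete-noise distribution and privacy accounting may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(weights):
    """Map real-valued weights to {-1, +1} by sign (assumed BNN-style binarization)."""
    return np.where(weights >= 0, 1, -1).astype(np.int8)

def privatize(binary_update, flip_prob):
    """Add discrete noise by flipping each binary entry with probability flip_prob.

    flip_prob is a hypothetical noise-level knob standing in for the paper's
    discrete-noise parameter; larger values give stronger perturbation.
    """
    flips = rng.random(binary_update.shape) < flip_prob
    return np.where(flips, -binary_update, binary_update)

def server_aggregate(client_updates):
    """FedAvg-style aggregation: average the noisy binary updates across clients."""
    return np.mean(np.stack(client_updates, axis=0).astype(np.float64), axis=0)

# Toy round: 4 clients, 10-dimensional model.
local_models = [rng.normal(size=10) for _ in range(4)]
noisy_binary = [privatize(binarize(w), flip_prob=0.1) for w in local_models]
global_update = server_aggregate(noisy_binary)
print(global_update)
```

In this sketch only the ±1 values (plus flips) leave each client, which illustrates both claimed benefits: low communication cost from binary uploads and client-level perturbation applied before aggregation.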
Pages: 3849-3854
Number of pages: 6