A Novel Approach for Differential Privacy-Preserving Federated Learning

Cited by: 0
Authors:
Elgabli, Anis [1 ,2 ]
Mesbah, Wessam [2 ,3 ]
Affiliations:
[1] King Fahd Univ Petr & Minerals, Ind & Syst Engn Dept, Dhahran 31261, Saudi Arabia
[2] King Fahd Univ Petr & Minerals, Ctr Commun Syst & Sensing, Dhahran 31261, Saudi Arabia
[3] King Fahd Univ Petr & Minerals, Elect Engn Dept, Dhahran 31261, Saudi Arabia
Keywords: Perturbation methods; Noise; Computational modeling; Stochastic processes; Privacy; Servers; Federated learning; Differential privacy; Standards; Databases; gradient descent (GD); stochastic gradient descent (SGD)
DOI: 10.1109/OJCOMS.2024.3521651
Chinese Library Classification: TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes: 0808; 0809
Abstract:
In this paper, we begin with a comprehensive evaluation of the effect of adding differential privacy (DP) to federated learning (FL) approaches, focusing on methodologies that employ global (stochastic) gradient descent (SGD/GD) and local SGD/GD techniques. These global and local techniques are commonly referred to as FedSGD/FedGD and FedAvg, respectively. Our analysis reveals that, as long as each client performs only one local iteration before transmitting to the parameter server (PS), FedGD and FedAvg achieve the same accuracy/loss under the same privacy guarantees, despite requiring different perturbation noise powers. Furthermore, we propose a novel DP mechanism that is shown to ensure privacy without compromising performance. In particular, we propose sharing a random seed (or a specified sequence of random seeds) among the collaborating clients, where each client uses this seed to perturb its updates prior to transmission to the PS. Importantly, because the seed is shared, the clients can cancel the noise effects and recover the original global model. This mechanism preserves privacy against both a "curious" PS and external eavesdroppers without compromising the performance of the final model at each client, thus mitigating the risk of inversion attacks aimed at (partially or fully) retrieving the clients' data. We also discuss the importance and effect of clipping, which upper-bounds the perturbation noise in practical implementations of DP mechanisms. Moreover, because each client can cancel the noise, our proposed approach allows arbitrarily high perturbation levels; clipping can therefore be avoided entirely, yielding the same performance as noise-free standard FL approaches.
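A minimal NumPy sketch of the seed-sharing idea described in the abstract (the names, the noise scale SIGMA, and the per-client seed offset are illustrative assumptions, not taken from the paper): each client perturbs its update with Gaussian noise derived from a shared secret seed, the PS only ever sees the perturbed updates, and any client holding the seed can regenerate and cancel the aggregate noise.

```python
import numpy as np

SHARED_SEED = 1234   # secret seed shared among the collaborating clients (assumed)
NUM_CLIENTS = 5
DIM = 10             # illustrative model-update dimension
SIGMA = 100.0        # perturbation std; can be arbitrarily large in this scheme

rng = np.random.default_rng(0)
true_updates = [rng.normal(size=DIM) for _ in range(NUM_CLIENTS)]

def perturb(update, client_id):
    """Client side: add seed-derived Gaussian noise before transmission."""
    noise_rng = np.random.default_rng(SHARED_SEED + client_id)
    return update + noise_rng.normal(scale=SIGMA, size=update.shape)

# The "curious" PS sees only heavily perturbed updates and averages them.
noisy_avg = np.mean([perturb(u, i) for i, u in enumerate(true_updates)], axis=0)

# Client side: regenerate every client's noise from the shared seed,
# subtract its average, and recover the noise-free aggregate.
avg_noise = np.mean(
    [np.random.default_rng(SHARED_SEED + i).normal(scale=SIGMA, size=DIM)
     for i in range(NUM_CLIENTS)],
    axis=0,
)
recovered = noisy_avg - avg_noise

assert np.allclose(recovered, np.mean(true_updates, axis=0))
```

Because the noise is cancelled exactly at the clients rather than merely averaged down, SIGMA can be made arbitrarily large without degrading the recovered model, which is why clipping becomes unnecessary in this setting.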
Pages: 466-476 (11 pages)