A Novel Approach for Differential Privacy-Preserving Federated Learning

Cited by: 0
Authors
Elgabli, Anis [1 ,2 ]
Mesbah, Wessam [2 ,3 ]
Affiliations
[1] King Fahd Univ Petr & Minerals, Ind & Syst Engn Dept, Dhahran 31261, Saudi Arabia
[2] King Fahd Univ Petr & Minerals, Ctr Commun Syst & Sensing, Dhahran 31261, Saudi Arabia
[3] King Fahd Univ Petr & Minerals, Elect Engn Dept, Dhahran 31261, Saudi Arabia
Source
IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY | 2025, Vol. 6
Keywords
Perturbation methods; Noise; Computational modeling; Stochastic processes; Privacy; Servers; Federated learning; Differential privacy; Standards; Databases; differential privacy; gradient descent (GD); stochastic gradient descent (SGD);
DOI
10.1109/OJCOMS.2024.3521651
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
In this paper, we start with a comprehensive evaluation of the effect of adding differential privacy (DP) to federated learning (FL) approaches, focusing on methodologies that employ global (stochastic) gradient descent (SGD/GD) and local SGD/GD techniques. These global and local techniques are commonly referred to as FedSGD/FedGD and FedAvg, respectively. Our analysis reveals that, as long as each client performs only one local iteration before transmitting to the parameter server (PS), as in FedGD, both FedGD and FedAvg achieve the same accuracy/loss under the same privacy guarantees, despite requiring different perturbation noise powers. Furthermore, we propose a novel DP mechanism that is shown to ensure privacy without compromising performance. In particular, we propose sharing a random seed (or a specified sequence of random seeds) among the collaborating clients, where each client uses this seed to introduce perturbations to its updates prior to transmission to the PS. Importantly, owing to the shared random seed, the clients are able to negate the noise effects and recover their original global model. This mechanism preserves privacy against both a "curious" PS and external eavesdroppers without compromising the performance of the final model at each client, thus mitigating the risk of inversion attacks aimed at retrieving (partially or fully) the clients' data. Furthermore, we discuss the importance and effect of clipping, used in practical implementations of DP mechanisms to upper bound the perturbation noise. Moreover, owing to the ability to cancel the noise at individual clients, the proposed approach allows arbitrarily high perturbation levels; hence, clipping can be avoided entirely, resulting in the same performance as noise-free standard FL approaches.
Pages: 466 - 476
Number of pages: 11
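
To make the seed-sharing mechanism described in the abstract concrete, the following is a minimal sketch, not the authors' code: the function names, the Gaussian noise model, the per-client/per-round seeding scheme, and the NumPy-based implementation are all assumptions for illustration. It shows clients perturbing their updates with noise generated from a shared seed, a server averaging only the noisy uploads, and each client regenerating and subtracting the aggregate noise to recover the noise-free average.

# Minimal sketch (hypothetical) of the seed-sharing DP mechanism described in the
# abstract: clients perturb updates with noise drawn from a shared random seed,
# the "curious" server only ever sees and averages the noisy uploads, and each
# client cancels the aggregate noise locally to recover the noise-free update.

import numpy as np

def client_perturb(update, seed, client_id, round_idx, noise_std):
    """Add seeded Gaussian noise to a client's model update before upload."""
    rng = np.random.default_rng((seed, client_id, round_idx))
    return update + rng.normal(0.0, noise_std, size=update.shape)

def server_aggregate(perturbed_updates):
    """The server averages the perturbed updates; it never sees clean ones."""
    return np.mean(perturbed_updates, axis=0)

def client_denoise(noisy_avg, seed, num_clients, round_idx, noise_std, shape):
    """Each client regenerates every client's noise from the shared seed and
    subtracts the average noise, recovering the noise-free global update."""
    total_noise = np.zeros(shape)
    for cid in range(num_clients):
        rng = np.random.default_rng((seed, cid, round_idx))
        total_noise += rng.normal(0.0, noise_std, size=shape)
    return noisy_avg - total_noise / num_clients

# Toy usage: 3 clients, a 4-dimensional model update, one round.
shared_seed, num_clients, noise_std, round_idx = 1234, 3, 50.0, 0
updates = [np.array([0.1, -0.2, 0.3, 0.05]) * (i + 1) for i in range(num_clients)]

uploads = [client_perturb(u, shared_seed, i, round_idx, noise_std)
           for i, u in enumerate(updates)]
noisy_avg = server_aggregate(uploads)
recovered = client_denoise(noisy_avg, shared_seed, num_clients, round_idx,
                           noise_std, updates[0].shape)

print(np.allclose(recovered, np.mean(updates, axis=0)))  # True: noise cancelled

Because the clients cancel the injected noise exactly, the noise level (noise_std above) can be made arbitrarily large without affecting the recovered model, which is the property the abstract invokes to argue that clipping can be avoided entirely.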