A Novel Approach for Differential Privacy-Preserving Federated Learning

Times cited: 0
Authors
Elgabli, Anis [1 ,2 ]
Mesbah, Wessam [2 ,3 ]
Affiliations
[1] King Fahd Univ Petr & Minerals, Ind & Syst Engn Dept, Dhahran 31261, Saudi Arabia
[2] King Fahd Univ Petr & Minerals, Ctr Commun Syst & Sensing, Dhahran 31261, Saudi Arabia
[3] King Fahd Univ Petr & Minerals, Elect Engn Dept, Dhahran 31261, Saudi Arabia
Source
IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2025, Vol. 6
Keywords
Perturbation methods; Noise; Computational modeling; Stochastic processes; Privacy; Servers; Federated learning; Differential privacy; Standards; Databases; differential privacy; gradient descent (GD); stochastic gradient descent (SGD);
DOI
10.1109/OJCOMS.2024.3521651
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline classification codes
0808; 0809;
Abstract
In this paper, we start with a comprehensive evaluation of the effect of adding differential privacy (DP) to federated learning (FL) approaches, focusing on methodologies employing global (stochastic) gradient descent (SGD/GD) and local SGD/GD techniques. These global and local techniques are commonly referred to as FedSGD/FedGD and FedAvg, respectively. Our analysis reveals that, as long as each client performs only one local iteration before transmitting to the parameter server (PS), as in FedGD, both FedGD and FedAvg achieve the same accuracy/loss under the same privacy guarantees, despite requiring different perturbation noise powers. Furthermore, we propose a novel DP mechanism that is shown to ensure privacy without compromising performance. In particular, we propose sharing a random seed (or a specified sequence of random seeds) among collaborative clients, where each client uses this seed to introduce perturbations to its updates prior to transmission to the PS. Importantly, because the random seed is shared, clients have the capability to cancel the noise effects and recover their original global model. This mechanism preserves privacy against both a "curious" PS and external eavesdroppers without compromising the performance of the final model at each client, thus mitigating the risk of inversion attacks aimed at retrieving (partially or fully) the clients' data. Furthermore, the importance and effect of clipping in practical implementations of DP mechanisms, used to upper bound the perturbation noise, are discussed. Moreover, owing to the ability to cancel the noise at individual clients, our proposed approach enables the introduction of arbitrarily high perturbation levels; hence, clipping can be avoided entirely, resulting in the same performance as noise-free standard FL approaches.
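To make the seed-sharing mechanism in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation. The function names, the derivation of each client's noise from the shared seed and a client index, and the Gaussian noise model are illustrative assumptions; the abstract specifies only that a shared random seed (or sequence of seeds) is used to generate perturbations that clients can later cancel.

import numpy as np

def perturb_update(update, shared_seed, client_id, noise_std):
    # Hypothetical per-client noise: derived deterministically from the shared
    # seed and the client's index, so every collaborating client can reproduce it.
    rng = np.random.default_rng([shared_seed, client_id])
    return update + rng.normal(0.0, noise_std, size=update.shape)

def server_aggregate(perturbed_updates):
    # The "curious" PS only ever sees noisy updates and averages them as usual.
    return np.mean(perturbed_updates, axis=0)

def client_denoise(noisy_average, shared_seed, num_clients, noise_std, shape):
    # Any client holding the shared seed regenerates every client's noise,
    # subtracts its average, and recovers the noise-free aggregated model.
    total_noise = np.zeros(shape)
    for cid in range(num_clients):
        rng = np.random.default_rng([shared_seed, cid])
        total_noise += rng.normal(0.0, noise_std, size=shape)
    return noisy_average - total_noise / num_clients

# Toy check: 3 clients, a 4-dimensional update, and deliberately large noise.
seed, std, n = 1234, 100.0, 3
updates = [np.full(4, i + 1.0) for i in range(n)]
noisy = [perturb_update(u, seed, i, std) for i, u in enumerate(updates)]
recovered = client_denoise(server_aggregate(noisy), seed, n, std, (4,))
assert np.allclose(recovered, np.mean(updates, axis=0))

Because the noise cancels exactly at the clients in this sketch, the noise standard deviation can be made arbitrarily large without affecting the recovered model, which is the property the abstract invokes to argue that clipping can be avoided.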
Pages: 466-476
Number of pages: 11
Related papers
50 records in total
  • [21] Hercules: Boosting the Performance of Privacy-Preserving Federated Learning
    Xu, Guowen
    Han, Xingshuo
    Xu, Shengmin
    Zhang, Tianwei
    Li, Hongwei
    Huang, Xinyi
    Deng, Robert H.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (05) : 4418 - 4433
  • [22] CRS-FL: Conditional Random Sampling for Communication-Efficient and Privacy-Preserving Federated Learning
    Wang, Jianhua
    Chang, Xiaolin
    Misic, Jelena
    Misic, Vojislav B.
    Li, Lin
    Yao, Yingying
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2025, 22 (01) : 198 - 208
  • [23] A Privacy-Preserving Federated Learning for Multiparty Data Sharing in Social IoTs
    Yin, Lihua
    Feng, Jiyuan
    Xun, Hao
    Sun, Zhe
    Cheng, Xiaochun
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2021, 8 (03) : 2706 - 2718
  • [24] PILE: Robust Privacy-Preserving Federated Learning Via Verifiable Perturbations
    Tang, Xiangyun
    Shen, Meng
    Li, Qi
    Zhu, Liehuang
    Xue, Tengfei
    Qu, Qiang
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (06) : 5005 - 5023
  • [25] DP-FedCMRS: Privacy-Preserving Federated Learning Algorithm to Solve Heterogeneous Data
    Zhang, Yang
    Long, Shigong
    Liu, Guangyuan
    Zhang, Junming
    IEEE ACCESS, 2025, 13 : 41984 - 41993
  • [26] GuardianAI: Privacy-preserving federated anomaly detection with differential privacy
    Alabdulatif, Abdulatif
    ARRAY, 2025, 26
  • [27] Privacy-preserving federated discovery of DNA motifs with differential privacy
    Chen, Yao
    Gan, Wensheng
    Huang, Gengsen
    Wu, Yongdong
    Yu, Philip S.
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249
  • [28] Privacy-preserving federated learning on lattice quantization
    Zhang, Lingjie
    Zhang, Hai
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2023, 21 (06)
  • [29] AddShare: A Privacy-Preserving Approach for Federated Learning
    Asare, Bernard Atiemo
    Branco, Paula
    Kiringa, Iluju
    Yeap, Tet
    COMPUTER SECURITY. ESORICS 2023 INTERNATIONAL WORKSHOPS, PT I, 2024, 14398 : 299 - 309
  • [30] A Survey of Differential Privacy Techniques for Federated Learning
    Wang, Xin
    Li, Jiaqian
    Ding, Xueshuang
    Zhang, Haoji
    Sun, Lianshan
    IEEE ACCESS, 2025, 13 : 6539 - 6555