A Novel Approach for Differential Privacy-Preserving Federated Learning

Cited by: 0
Authors
Elgabli, Anis [1 ,2 ]
Mesbah, Wessam [2 ,3 ]
Affiliations
[1] King Fahd Univ Petr & Minerals, Ind & Syst Engn Dept, Dhahran 31261, Saudi Arabia
[2] King Fahd Univ Petr & Minerals, Ctr Commun Syst & Sensing, Dhahran 31261, Saudi Arabia
[3] King Fahd Univ Petr & Minerals, Elect Engn Dept, Dhahran 31261, Saudi Arabia
Source
IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2025, Vol. 6
Keywords
Perturbation methods; Noise; Computational modeling; Stochastic processes; Privacy; Servers; Federated learning; Differential privacy; Standards; Databases; differential privacy; gradient descent (GD); stochastic gradient descent (SGD);
DOI
10.1109/OJCOMS.2024.3521651
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes
0808; 0809
Abstract
In this paper, we start with a comprehensive evaluation of the effect of adding differential privacy (DP) to federated learning (FL) approaches, focusing on methodologies employing global (stochastic) gradient descent (SGD/GD) and local SGD/GD techniques. These global and local techniques are commonly referred to as FedSGD/FedGD and FedAvg, respectively. Our analysis reveals that, as long as each client performs only one local iteration before transmitting to the parameter server (PS), both FedGD and FedAvg achieve the same accuracy/loss for the same privacy guarantees, despite requiring different perturbation noise powers. Furthermore, we propose a novel DP mechanism, which is shown to ensure privacy without compromising performance. In particular, we propose the sharing of a random seed (or a specified sequence of random seeds) among collaborative clients, where each client uses this seed to introduce perturbations to its updates prior to transmission to the PS. Importantly, due to the random seed sharing, clients possess the capability to negate the noise effects and recover their original global model. This mechanism preserves privacy against both a "curious" PS and external eavesdroppers without compromising the performance of the final model at each client, thus mitigating the risk of inversion attacks aimed at retrieving (partially or fully) the clients' data. Furthermore, the importance and effect of clipping in the practical implementation of DP mechanisms, in order to upper bound the perturbation noise, are discussed. Moreover, owing to the ability to cancel noise at individual clients, our proposed approach enables the introduction of arbitrarily high perturbation levels; hence, clipping can be avoided entirely, yielding the same performance as noise-free standard FL approaches.
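The shared-seed mechanism described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the Gaussian noise choice, and the per-client seed derivation (`seed + client_id * 1_000_003`) are all assumptions made for the sketch. The key property it demonstrates is that the PS only ever sees perturbed updates, while any client holding the shared seed can regenerate every client's noise vector and subtract its average from the aggregated model.

```python
import random


def client_noise(seed, client_id, dim, sigma):
    # Derive each client's noise deterministically from the shared seed,
    # so every collaborating client can reproduce all noise vectors.
    # (The per-client derivation below is an assumption for this sketch.)
    rng = random.Random(seed + client_id * 1_000_003)
    return [rng.gauss(0.0, sigma) for _ in range(dim)]


def perturb_update(update, seed, client_id, sigma):
    # Client-side: add seed-derived noise before transmitting to the PS.
    # Because clients can later cancel the noise, sigma may be arbitrarily
    # large, which is why clipping can be avoided in this scheme.
    noise = client_noise(seed, client_id, len(update), sigma)
    return [u + n for u, n in zip(update, noise)]


def server_average(perturbed_updates):
    # PS-side: the "curious" server only ever sees noisy updates
    # and averages them as in standard FL aggregation.
    k = len(perturbed_updates)
    return [sum(vals) / k for vals in zip(*perturbed_updates)]


def client_denoise(avg_model, seed, num_clients, sigma):
    # Client-side: regenerate every client's noise from the shared seed,
    # average it, and subtract it to recover the noise-free global model.
    dim = len(avg_model)
    noise_avg = [0.0] * dim
    for cid in range(num_clients):
        n = client_noise(seed, cid, dim, sigma)
        noise_avg = [a + x / num_clients for a, x in zip(noise_avg, n)]
    return [m - a for m, a in zip(avg_model, noise_avg)]
```

A round then consists of each client calling `perturb_update`, the PS calling `server_average`, and each client calling `client_denoise` on the broadcast result. The recovered model matches the plain (noise-free) average up to floating-point error, which is the sense in which the scheme incurs no performance loss.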
Pages: 466-476
Page count: 11