FedDP-SA: Boosting Differentially Private Federated Learning via Local Data Set Splitting

Cited by: 0
Authors
Liu, Xuezheng [1 ]
Zhou, Yipeng [2 ]
Wu, Di [1 ]
Hu, Miao [1 ]
Hui Wang, Jessie [3 ,4 ]
Guizani, Mohsen [5 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangdong Key Lab Big Data Anal & Proc, Guangzhou 510006, Peoples R China
[2] Macquarie Univ, Fac Sci & Engn, Dept Comp, Sydney, NSW 2109, Australia
[3] Tsinghua Univ, Inst Network Sci & Cyberspace, Beijing 100084, Peoples R China
[4] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Beijing 100084, Peoples R China
[5] Mohamed bin Zayed Univ Artificial Intelligence, Machine Learning Dept, Abu Dhabi, U Arab Emirates
Source
IEEE INTERNET OF THINGS JOURNAL | 2024 / Vol. 11 / Issue 19
Funding
National Natural Science Foundation of China;
关键词
Noise; Privacy; Computational modeling; Data models; Differential privacy; Accuracy; Internet of Things; Data splitting; federated learning (FL); Gaussian mechanism; sensitivity and convergence rate;
D O I
10.1109/JIOT.2024.3421991
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Federated learning (FL) has emerged as an attractive collaborative machine learning framework that enables model training across decentralized devices by exposing only model parameters. However, malicious attackers can still hijack the communicated parameters to reconstruct clients' raw samples, resulting in privacy leakage. To defend against such attacks, differentially private FL (DPFL) has been devised, which incurs negligible computation overhead in protecting privacy by adding noise. Nevertheless, low model utility and communication efficiency make DPFL hard to deploy in real environments. To overcome these deficiencies, we propose a novel DPFL algorithm called FedDP-SA (namely, federated learning with differential privacy by splitting local data sets and averaging parameters). Specifically, FedDP-SA splits a local data set into multiple subsets for parameter updating. Then, the parameters averaged over all subsets, plus differential privacy (DP) noise, are returned to the parameter server. FedDP-SA offers dual benefits: 1) enhancing model accuracy by efficiently lowering sensitivity, thereby reducing the noise required to ensure DP and 2) improving communication efficiency by communicating model parameters at a lower frequency. These advantages are validated through sensitivity analysis and convergence rate analysis. Finally, we conduct comprehensive experiments to verify the performance of FedDP-SA against other state-of-the-art baseline algorithms.
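The split-update-average step described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a linear least-squares local solver, per-subset update clipping, and a Gaussian mechanism whose scale shrinks with the number of subsets (reflecting the reduced sensitivity the abstract claims); `local_update`, `feddp_sa_client`, and all parameter names are hypothetical and not taken from the paper.

```python
import numpy as np

def local_update(params, subset, lr=0.1):
    # Placeholder gradient step on one subset (assumed linear model with
    # squared loss; the paper's actual local solver is not specified here).
    X, y = subset
    grad = X.T @ (X @ params - y) / len(y)
    return params - lr * grad

def feddp_sa_client(params, dataset, num_subsets, clip, sigma, rng):
    """One hypothetical FedDP-SA client round: split the local data set
    into `num_subsets` disjoint subsets, run a local update on each,
    average the clipped parameter updates, then add Gaussian noise.
    Averaging over subsets bounds each subset's influence, which is the
    sensitivity-lowering effect the abstract refers to."""
    X, y = dataset
    idx = rng.permutation(len(y))
    parts = np.array_split(idx, num_subsets)
    updates = []
    for p in parts:
        new = local_update(params.copy(), (X[p], y[p]))
        delta = new - params
        # Clip each subset's update to bound its norm by `clip`.
        norm = np.linalg.norm(delta)
        if norm > 0:
            delta = delta * min(1.0, clip / norm)
        updates.append(delta)
    avg = np.mean(updates, axis=0)
    # Gaussian mechanism: noise scale proportional to clip / num_subsets,
    # an assumed calibration reflecting the averaged (lower) sensitivity.
    noise = rng.normal(0.0, sigma * clip / num_subsets, size=params.shape)
    return params + avg + noise
```

With `sigma = 0` the returned parameters are simply the old parameters plus the average of the clipped subset updates, whose norm is bounded by `clip`; only one noisy vector is sent per round, which is the communication saving the abstract mentions.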
Pages: 31687-31698
Page count: 12
Related Papers
26 items in total
  • [21] On the impact of non-IID data on the performance and fairness of differentially private federated learning
    Amiri, Saba
    Belloum, Adam
    Nalisnick, Eric
    Klous, Sander
    Gommans, Leon
    52ND ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS WORKSHOP VOLUME (DSN-W 2022), 2022, : 52 - 58
  • [22] DP-FL: a novel differentially private federated learning framework for the unbalanced data
    Huang, Xixi
    Ding, Ye
    Jiang, Zoe L.
    Qi, Shuhan
    Wang, Xuan
    Liao, Qing
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2020, 23 (04): : 2529 - 2545
  • [24] Machine Learning Model Generation With Copula-Based Synthetic Dataset for Local Differentially Private Numerical Data
    Sei, Yuichi
    Onesimu, J. Andrew
    Ohsuga, Akihiko
    IEEE ACCESS, 2022, 10 : 101656 - 101671
  • [25] Squeezing More Utility via Adaptive Clipping on Differentially Private Gradients in Federated Meta-Learning
    Wang, Ning
    Xiao, Yang
    Chen, Yimin
    Zhang, Ning
    Lou, Wenjing
    Hou, Y. Thomas
    PROCEEDINGS OF THE 38TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2022, 2022, : 647 - 657
  • [26] Assessing Wearable Human Activity Recognition Systems Against Data Poisoning Attacks in Differentially-Private Federated Learning
    Shahid, Abdur R.
    Imteaj, Ahmed
    Badsha, Shahriar
    Hossain, Md Zarif
    2023 IEEE INTERNATIONAL CONFERENCE ON SMART COMPUTING, SMARTCOMP, 2023, : 355 - 360