Bidirectional adaptive differential privacy federated learning scheme

Cited by: 0
Authors
Li, Yang [1 ,2 ]
Xu, Jin [1 ]
Zhu, Jianming [1 ]
Wang, Youwei [1 ,2 ]
Affiliations
[1] School of Information, Central University of Finance and Economics, Beijing
[2] Ministry of Education, Engineering Research Center of State Financial Security, Central University of Finance and Economics, Beijing
Source
Xi'an Dianzi Keji Daxue Xuebao / Journal of Xidian University | 2024, Vol. 51, No. 3
Keywords
bidirectional adaptive noise; differential privacy; federated learning; RMSprop; sampling;
DOI
10.19665/j.issn1001-2400.20230706
Abstract
With the explosive growth of personal data, federated learning based on differential privacy can be used to break down data islands while preserving user privacy: participants train on their local data and share noise-perturbed parameters with a central server for aggregation, realizing distributed machine learning. However, this model has two defects. On the one hand, data information can still be leaked when the central server broadcasts parameters, posing a risk to user privacy; on the other hand, adding too much noise to the parameters degrades the quality of parameter aggregation and lowers the accuracy of the federated model. To solve these problems, a bidirectional adaptive differential privacy federated learning scheme (Federated Learning Approach with Bidirectional Adaptive Differential Privacy, FedBADP) is proposed, which adaptively adds noise to the gradients transmitted by both the participants and the central server, keeping the data secure without degrading model accuracy. Meanwhile, considering the performance limitations of the participants' hardware devices, the scheme samples their gradients to reduce communication overhead, and uses RMSprop on both the participants and the central server to accelerate convergence and improve model accuracy. Experiments show that the proposed model enhances user privacy preservation while maintaining good accuracy. © 2024 Journal of Xidian University. All Rights Reserved.
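The abstract names three generic ingredients: differentially private noise added to gradients, gradient sampling to cut communication cost, and RMSprop for faster convergence. The sketch below illustrates those standard building blocks in NumPy; it is a minimal illustration of the general techniques, not the paper's actual FedBADP algorithm, and all function names and parameter values here are assumptions.

```python
import numpy as np

def clip_and_noise(grad, clip_norm=1.0, sigma=0.5, rng=None):
    """DP-SGD-style step: clip the gradient's L2 norm, then add Gaussian noise.
    (Illustrative; the paper's adaptive noise schedule is not reproduced here.)"""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def sample_gradient(grad, keep_ratio=0.5, rng=None):
    """Zero out a random subset of coordinates so only a fraction of the
    gradient needs to be transmitted (a simple sparsification scheme)."""
    rng = rng or np.random.default_rng()
    mask = rng.random(grad.shape) < keep_ratio
    return grad * mask

def rmsprop_step(param, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSprop update; `cache` is the running mean of squared gradients."""
    cache = decay * cache + (1.0 - decay) * grad**2
    return param - lr * grad / (np.sqrt(cache) + eps), cache
```

In a federated round, a participant would apply `clip_and_noise` and `sample_gradient` before uploading; the server would aggregate the uploads, add its own noise before broadcasting (the "bidirectional" part), and both sides could use `rmsprop_step` to apply updates.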
Pages: 158-169 (11 pages)