Minimum Gaussian Noise Variance of Federated Learning in the Presence of Mutual Information Based Differential Privacy

Cited by: 0
Authors
He, Hua [1 ,2 ,3 ]
He, Zheng [4 ]
Affiliations
[1] Chongqing Technol & Business Univ, Sch Business Adm, Chongqing 400067, Peoples R China
[2] Geely Univ China, Sch Business, Chengdu 610000, Peoples R China
[3] Krirk Univ, Int Coll, Bangkok 10220, Thailand
[4] Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu 610031, Peoples R China
Keywords
Servers; Data privacy; Data models; Training; Privacy; Gaussian noise; Solid modeling; Differential privacy; Federated learning; Mutual information; federated learning; mutual information; privacy-utility trade-off; TRADEOFFS;
DOI
10.1109/ACCESS.2023.3323020
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL), which not only protects data security and privacy but also trains models on distributed devices, has received much attention in the literature. Traditionally, stochastic gradient descent (SGD) is used in FL because of its excellent empirical performance, but a user's private information can still be leaked by analyzing the weight updates exchanged during FL iterations. Differential privacy (DP) is an effective way to mitigate this leakage: noise is added to the users' gradients, and this artificial noise helps prevent information leakage. However, SGD-based FL with DP has not yet been investigated with a comprehensive theoretical analysis that considers privacy and data utility jointly, especially from an information-theoretic perspective. In this paper, we investigate FL in the presence of mutual information based DP (MI-DP). Specifically, first, a Gaussian DP mechanism is applied to either the clients or the central server of the FL model, and the privacy and utility of the FL model are characterized by conditional mutual information and distortion, respectively. For a given privacy budget, we establish lower bounds on the variance of the Gaussian noise added to the clients or the central server, and show that the utility of the global model remains the same in both cases. Next, we study the privacy-utility trade-off problem in a more general setting, where both the model parameter and the privacy requirements of the clients are flexible. We propose a privacy-preserving scheme that maximizes the utility of the global model while satisfying the different privacy requirements of all clients. Finally, the results of this paper are further illustrated by experiments.
Pages: 111212 - 111225
Page count: 14
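To make the two noise-injection points compared in the abstract concrete, the Python sketch below contrasts client-side and server-side Gaussian perturbation in a single federated-averaging round. This is a minimal illustration under assumed names (local_update, sigma_sq, a toy linear model); it is not the paper's MI-DP scheme, its noise calibration, or its variance lower bounds.

```python
# Illustrative sketch (assumptions, not the paper's algorithm): one federated
# averaging round where Gaussian noise of variance sigma_sq is added either by
# each client before upload (client-side DP) or once by the server after
# aggregation (server-side DP).
import numpy as np

def local_update(global_model: np.ndarray, data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Hypothetical local step: one least-squares gradient step on the client's data."""
    X, y = data[:, :-1], data[:, -1]
    grad = X.T @ (X @ global_model - y) / len(y)
    return global_model - lr * grad

def client_side_dp_round(global_model, client_datasets, sigma_sq):
    """Each client perturbs its own update with Gaussian noise of variance sigma_sq."""
    noisy_updates = []
    for data in client_datasets:
        update = local_update(global_model, data)
        noise = np.random.normal(0.0, np.sqrt(sigma_sq), size=update.shape)
        noisy_updates.append(update + noise)
    return np.mean(noisy_updates, axis=0)  # server averages the noisy updates

def server_side_dp_round(global_model, client_datasets, sigma_sq):
    """Clients send exact updates; the server perturbs the aggregate once."""
    updates = [local_update(global_model, data) for data in client_datasets]
    aggregate = np.mean(updates, axis=0)
    return aggregate + np.random.normal(0.0, np.sqrt(sigma_sq), size=aggregate.shape)

# Toy usage: 3 clients, a 2-dimensional linear model, arbitrary noise variance.
rng = np.random.default_rng(0)
clients = [rng.normal(size=(20, 3)) for _ in range(3)]
w = np.zeros(2)
w = client_side_dp_round(w, clients, sigma_sq=0.01)
```

In this toy setup the averaging of per-client noise makes the two variants produce global updates with different noise variance at the server; the paper's result concerns how large sigma_sq must be in each case to meet a given conditional mutual information privacy budget, which this sketch does not compute.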