Privacy-Preserving SGD on Shuffle Model

Cited: 0
Authors
Zhang, Lingjie [1 ,2 ]
Zhang, Hai [1 ]
Affiliations
[1] Northwest Univ, Sch Math, Xian 710127, Peoples R China
[2] Baoji Univ Arts & Sci, Sch Math & Informat Sci, Baoji 721000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DIFFERENTIAL PRIVACY;
DOI
10.1155/2023/4055950
CLC Number
O1 [Mathematics];
Discipline Code
0701; 070101;
Abstract
In this paper, we study differentially private stochastic gradient descent (SGD) algorithms for stochastic convex optimization (SCO). Most of the existing literature requires additional assumptions on the losses, such as Lipschitz continuity, smoothness, strong convexity, or uniformly bounded model parameters, or focuses on the Euclidean (i.e., $\ell_2^d$) setting. These restrictive requirements exclude many popular losses, including the absolute loss and the hinge loss. By loosening these restrictions, we propose two differentially private SGD algorithms, one without and one with the shuffle model (DP-SGD-NOS and DP-SGD-S for short), for $(\alpha, L)$-Hölder smooth losses; both add calibrated Laplace noise, under the non-shuffled and shuffled schemes respectively, in the $\ell_p^d$ setting for $p \in [1,2]$. We provide privacy guarantees using advanced composition and privacy amplification techniques. We also analyze the convergence of DP-SGD-NOS and DP-SGD-S and obtain, up to logarithmic factors, the optimal excess population risks $O\big(1/\sqrt{n} + \sqrt{d\log(1/\delta)}/(n\epsilon)\big)$ and $O\big(1/\sqrt{n} + \sqrt{d\log(1/\delta)\log(n/\delta)}/(n^{(4+\alpha)/(2(1+\alpha))}\epsilon)\big)$, with gradient complexity $O\big(n^{(2-\alpha)/(1+\alpha)} + n\big)$. It turns out that the optimal utility bound with the shuffle model is superior to the bound without it, which is consistent with previous work. In addition, DP-SGD-S achieves the optimal utility bound with linear $O(n)$ gradient computations for $\alpha = 1/2$. There is a significant tradeoff between $(\alpha, L)$-Hölder smooth losses and the gradient complexity of differentially private SGD with and without the shuffle model.
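The core mechanism the abstract describes, SGD perturbed with calibrated Laplace noise, can be sketched as follows. This is a minimal illustration, not the authors' DP-SGD-NOS: the helper names (`dp_sgd_laplace`, `hinge_grad`, `clip_l1`), the hinge-loss choice, and the basic-composition noise calibration are all assumptions made here for concreteness; the paper itself uses advanced composition, and DP-SGD-S additionally relies on shuffle-model amplification.

```python
# Hedged sketch of Laplace-noise DP-SGD for a hinge-loss linear model.
# Assumptions: X is an (n, d) array, y holds labels in {-1, +1}, and the
# per-step privacy budget is split uniformly via basic composition.
import numpy as np

def hinge_grad(w, x, y):
    """Subgradient of the hinge loss max(0, 1 - y * <w, x>) at w."""
    return -y * x if y * np.dot(w, x) < 1.0 else np.zeros_like(w)

def clip_l1(g, C):
    """Rescale g so its l1 norm is at most C (bounds per-step sensitivity)."""
    norm = np.linalg.norm(g, ord=1)
    return g if norm <= C else g * (C / norm)

def dp_sgd_laplace(X, y, epsilon, T, C=1.0, lr=0.1, rng=None):
    """Run T SGD steps, perturbing each clipped subgradient with Laplace
    noise of scale 2*C*T/epsilon (l1 sensitivity 2C, basic composition)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.zeros(d)
    scale = 2.0 * C * T / epsilon   # per-step Laplace scale
    for t in range(T):
        i = rng.integers(n)                      # sample one record
        g = clip_l1(hinge_grad(w, X[i], y[i]), C)
        g += rng.laplace(0.0, scale, size=d)     # calibrated Laplace noise
        w -= lr / np.sqrt(t + 1) * g             # decaying step size
    return w
```

In the shuffle-model variant, each user would apply a local randomizer like the noisy-gradient step above and submit the result to a shuffler; the uniformly random permutation amplifies the local privacy guarantee, which is the source of the improved $n^{(4+\alpha)/(2(1+\alpha))}$ dependence in the second bound.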
Pages: 16