Matrix Gaussian Mechanisms for Differentially-Private Learning

Cited by: 6
Authors
Yang, Jungang [1 ]
Xiang, Liyao [1 ]
Yu, Jiahao [1 ]
Wang, Xinbing [1 ]
Guo, Bin [2 ]
Li, Zhetao [3 ]
Li, Baochun [4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai 200240, Peoples R China
[2] Northwestern Polytech Univ, Xian 710072, Shaanxi, Peoples R China
[3] Xiangtan Univ, Xiangtan 411105, Hunan, Peoples R China
[4] Univ Toronto, Toronto, ON M5S, Canada
Funding
National Key R&D Program of China;
Keywords
Differential privacy; Covariance matrices; Collaborative work; Data models; Privacy; Gaussian distribution; Sensitivity; machine learning; data mining; data privacy;
DOI
10.1109/TMC.2021.3093316
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The wide deployment of machine learning algorithms has become a severe threat to user data privacy. As learning data is of high dimensionality and high order, preserving its privacy is intrinsically hard. Conventional differential privacy mechanisms often incur a significant utility decline because they are designed for scalar values from the start. We attribute this to the fact that conventional approaches do not take the structural information of the data into account, and thus fail to provide sufficient privacy or utility. As the main novelty of this work, we propose the Matrix Gaussian Mechanism (MGM), a new (ε, δ)-differential privacy mechanism for preserving the privacy of learning data. By imposing unimodal distributions on the noise, we introduce two mechanisms based on MGM with improved utility. We further show that, with the utility space available, the proposed mechanisms can be instantiated with optimized utility and admit a closed-form solution scalable to large-scale problems. We experimentally show that our mechanisms, applied to privacy-preserving federated learning, are superior in utility to state-of-the-art differential privacy mechanisms.
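The core idea the abstract describes is perturbing a matrix-valued query with noise drawn from a matrix-variate Gaussian distribution rather than adding i.i.d. scalar noise to each entry. The sketch below illustrates only the standard matrix-variate Gaussian sampling step, Z ~ MN(0, Σ_r, Σ_c), via Z = A G Bᵀ with A Aᵀ = Σ_r and B Bᵀ = Σ_c; the function name and covariance choices are illustrative assumptions, and the paper's actual MGM additionally calibrates Σ_r and Σ_c to the query's sensitivity to guarantee (ε, δ)-differential privacy, which is not shown here.

```python
import numpy as np


def matrix_gaussian_perturb(query_output, sigma_r, sigma_c, rng=None):
    """Add matrix-variate Gaussian noise Z ~ MN(0, sigma_r, sigma_c).

    Illustrative sketch only: sigma_r (m x m) and sigma_c (n x n) are
    assumed positive definite; calibrating them for a DP guarantee is
    the paper's contribution and is NOT performed here.
    """
    rng = np.random.default_rng(rng)
    m, n = query_output.shape
    a = np.linalg.cholesky(sigma_r)    # row-covariance factor, A @ A.T = sigma_r
    b = np.linalg.cholesky(sigma_c)    # column-covariance factor, B @ B.T = sigma_c
    g = rng.standard_normal((m, n))    # i.i.d. standard Gaussian entries
    return query_output + a @ g @ b.T  # structured noise A @ G @ B.T


# With isotropic covariances this reduces to the ordinary (entrywise)
# Gaussian mechanism; non-diagonal covariances shape noise along the
# row/column structure of the data.
x = np.ones((3, 2))
y = matrix_gaussian_perturb(x, 0.1 * np.eye(3), 0.1 * np.eye(2), rng=0)
```

Choosing non-isotropic Σ_r and Σ_c is what lets a matrix mechanism spend less noise on directions where the query is less sensitive, which is the source of the utility gain the abstract claims.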
Pages: 1036-1048
Page count: 13
References
41 entries in total
  • [1] Deep Learning with Differential Privacy
    Abadi, Martin
    Chu, Andy
    Goodfellow, Ian
    McMahan, H. Brendan
    Mironov, Ilya
    Talwar, Kunal
    Zhang, Li
    [J]. CCS'16: PROCEEDINGS OF THE 2016 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2016, : 308 - 318
  • [2] How to Accurately and Privately Identify Anomalies
    Asif, Hafiz
    Papakonstantinou, Periklis A.
    Vaidya, Jaideep
    [J]. PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 719 - 736
  • [3] Balle B, 2018, PR MACH LEARN RES, V80
  • [4] Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds
    Bassily, Raef
    Smith, Adam
    Thakurta, Abhradeep
    [J]. 2014 55TH ANNUAL IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS 2014), 2014, : 464 - 473
  • [5] Beimel A, 2010, LECT NOTES COMPUT SC, V5978, P437, DOI 10.1007/978-3-642-11799-2_26
  • [6] Boyd S., 2004, CONVEX OPTIMIZATION, DOI 10.1017/CBO9780511804441
  • [7] Securely Sampling Biased Coins with Applications to Differential Privacy
    Champion, Jeffrey
    Shelat, Abhi
    Ullman, Jonathan
    [J]. PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 603 - 614
  • [8] MVG Mechanism: Differential Privacy under Matrix-Valued Query
    Chanyaswad, Thee
    Dytso, Alex
    Poor, H. Vincent
    Mittal, Prateek
    [J]. PROCEEDINGS OF THE 2018 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'18), 2018, : 230 - 246
  • [9] Chaudhuri K, 2011, J MACH LEARN RES, V12, P1069
  • [10] Dua D., 2017, UCI MACHINE LEARNING