Laplacian Smoothing Stochastic ADMMs With Differential Privacy Guarantees

Cited by: 6
Authors
Liu, Yuanyuan [1]
Geng, Jiacheng [1]
Shang, Fanhua [2,3]
An, Weixin [1]
Liu, Hongying [1,3]
Zhu, Qi [1]
Feng, Wei [2,4]
Affiliations
[1] Xidian Univ, Sch Artificial Intelligence, Key Lab Intelligent Percept & Image Understanding, Minist Educ, Xian 710071, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, Sch Comp Sci & Technol, Tianjin 300350, Peoples R China
[3] Peng Cheng Lab, Shenzhen 518055, Peoples R China
[4] State Adm Cultural Heritage SACH, Key Res Ctr Surface Monitoring & Anal Cultural Re, Tianjin 300350, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Smoothing methods; Laplace equations; Privacy; Differential privacy; Stochastic processes; Optimization; Convergence; Alternating direction method of multipliers (ADMM); differential privacy; variance reduction; Laplacian smoothing; utility guarantees;
DOI
10.1109/TIFS.2022.3170271
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline code
081202;
Abstract
Many machine learning tasks, such as structured sparse coding and multi-task learning, can be cast as equality constrained optimization problems. The stochastic alternating direction method of multipliers (SADMM) is a popular algorithm for solving such large-scale problems and has been used successfully in many real-world applications. However, existing SADMMs overlook an important design issue: protecting sensitive information. To address this challenge, this paper proposes a novel differentially private stochastic ADMM framework for solving equality constrained machine learning problems. In particular, to further improve utility in privacy-preserving equality constrained optimization, a Laplacian smoothing operation is introduced into our differentially private ADMM framework; it smooths out the Gaussian noise injected by the Gaussian mechanism. We then propose an efficient differentially private variance reduced stochastic ADMM (DP-VRADMM) algorithm with Laplacian smoothing for both strongly convex and general convex objectives. As a by-product, we also present a new differentially private stochastic ADMM algorithm. In theory, we provide both privacy guarantees and utility guarantees for the proposed algorithms, showing that Laplacian smoothing improves the utility bounds of our algorithms. Experimental results on real-world datasets verify our theoretical results and the effectiveness of our algorithms.
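The smoothing idea described in the abstract can be illustrated with a short, self-contained sketch. The snippet below is not the paper's DP-VRADMM algorithm; it is a minimal illustration, assuming a 1-D discrete Laplacian with periodic boundary conditions inverted via the FFT (as in Laplacian smoothing gradient methods), of how an update direction perturbed by the Gaussian mechanism can be smoothed before it is applied. The function names laplacian_smooth and dp_gaussian_step and all parameters (sigma, clip, noise_std, lr) are hypothetical and chosen for this sketch only.

import numpy as np

def laplacian_smooth(v, sigma=1.0):
    # Apply (I - sigma * Delta)^{-1} to v via FFT, where Delta is the 1-D
    # discrete Laplacian with periodic boundary. High-frequency components
    # (e.g. injected Gaussian noise) are damped; smooth components survive.
    n = v.shape[0]
    d = np.zeros(n)
    d[0], d[1], d[-1] = -2.0, 1.0, 1.0            # stencil of Delta
    denom = 1.0 - sigma * np.fft.fft(d)           # eigenvalues of I - sigma*Delta, all >= 1
    return np.real(np.fft.ifft(np.fft.fft(v) / denom))

def dp_gaussian_step(x, grad, lr=0.1, clip=1.0, noise_std=1.0, sigma=1.0, rng=None):
    # One gradient-style update under the Gaussian mechanism followed by
    # Laplacian smoothing: clip the gradient, add Gaussian noise calibrated
    # to the clipping norm, smooth the noisy direction, then take the step.
    rng = np.random.default_rng() if rng is None else rng
    g = grad / max(1.0, np.linalg.norm(grad) / clip)            # norm clipping
    g_noisy = g + rng.normal(0.0, noise_std * clip, size=g.shape)
    return x - lr * laplacian_smooth(g_noisy, sigma)

# Toy usage with a smooth gradient and a fixed seed (illustration only).
rng = np.random.default_rng(0)
x = np.zeros(128)
grad = np.sin(np.linspace(0.0, 2.0 * np.pi, 128))
x_next = dp_gaussian_step(x, grad, rng=rng)

Because the operator (I - sigma*Delta)^{-1} has eigenvalues in (0, 1], it attenuates the high-frequency part of the injected Gaussian noise while leaving the smooth component of the update largely intact, which is the intuition behind the improved utility bounds the abstract refers to.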
Pages: 1814-1826
Number of pages: 13