Rényi Differentially Private ADMM for Non-Smooth Regularized Optimization

Times Cited: 2
Authors
Chen, Chen [1]
Lee, Jaewoo [1]
Affiliations
[1] Univ Georgia, Athens, GA 30602 USA
Source
PROCEEDINGS OF THE TENTH ACM CONFERENCE ON DATA AND APPLICATION SECURITY AND PRIVACY, CODASPY 2020 | 2020
Keywords
differential privacy; convex optimization; ADMM; regularization; variable selection; noise; path
DOI
10.1145/3374664.3375733
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we consider the problem of minimizing composite objective functions consisting of a convex differentiable loss function plus a non-smooth regularization term, such as the L1 norm or nuclear norm, under Rényi differential privacy (RDP). To solve the problem, we propose two stochastic alternating direction method of multipliers (ADMM) algorithms: ssADMM, based on gradient perturbation, and mpADMM, based on output perturbation. Both algorithms decompose the original problem into sub-problems that have closed-form solutions. The first algorithm, ssADMM, applies the recent privacy amplification result for RDP to reduce the amount of noise to add. The second algorithm, mpADMM, numerically computes the sensitivity of the ADMM variable updates and releases the updated parameter vector at the end of each epoch. We compare the performance of our algorithms with several baseline algorithms on both real and simulated datasets. Experimental results show that, in high privacy regimes (small ε), ssADMM and mpADMM outperform the baseline algorithms in terms of classification and feature selection performance, respectively.
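To illustrate the general pattern the abstract describes (stochastic ADMM with gradient perturbation and a closed-form solution of the non-smooth sub-problem), the sketch below shows a minimal gradient-perturbed ADMM loop for L1-regularized logistic regression. It is not the paper's ssADMM algorithm: the function names, hyper-parameters (lam, rho, eta, clip, sigma), and the hand-set noise scale are illustrative assumptions, and the RDP accounting and subsampling amplification analysis from the paper are not reproduced.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_threshold(v, tau):
    """Closed-form solution of the L1 (z-update) sub-problem."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def dp_admm_l1_logreg(X, y, lam=0.01, rho=1.0, eta=0.1,
                      clip=1.0, sigma=1.0, epochs=20, batch=64, seed=0):
    """Illustrative gradient-perturbed stochastic ADMM for L1-regularized
    logistic regression.  Labels y must be in {-1, +1}.  The noise scale
    sigma is a placeholder, not a calibrated RDP budget."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)          # primal variable tied to the smooth loss
    z = np.zeros(d)          # primal variable tied to the L1 term
    u = np.zeros(d)          # scaled dual variable
    for _ in range(epochs):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        # Per-example logistic-loss gradients, clipped to bound the
        # L2 sensitivity of the mini-batch gradient.
        margins = y[idx] * (X[idx] @ w)
        g = -(y[idx] * sigmoid(-margins))[:, None] * X[idx]
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g / np.maximum(1.0, norms / clip)
        # Gaussian noise scaled to the clipping bound (gradient perturbation).
        noise = rng.normal(0.0, sigma * clip, size=d)
        grad = (g.sum(axis=0) + noise) / len(idx)
        # Linearized w-update: one noisy gradient step on the augmented term.
        w = w - eta * (grad + rho * (w - z + u))
        # z-update has a closed form: soft-thresholding.
        z = soft_threshold(w + u, lam / rho)
        # Dual update.
        u = u + w - z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]                     # sparse ground truth
    y = np.sign(X @ w_true + 0.1 * rng.normal(size=500))
    w_hat = dp_admm_l1_logreg(X, y)
    print("nonzero coefficients:", np.flatnonzero(np.abs(w_hat) > 1e-8))
```

The z-update is the closed-form sub-problem the abstract refers to for the L1 norm; for the nuclear norm, the analogous operator is soft-thresholding of singular values.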
Pages: 319-328
Number of pages: 10