Efficient Sparse Least Absolute Deviation Regression With Differential Privacy

Cited by: 1
Authors
Liu, Weidong [1 ]
Mao, Xiaojun [2 ]
Zhang, Xiaofei [3 ]
Zhang, Xin [4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Math Sci, MoE Key Lab Artificial Intelligence, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Sch Math Sci, Key Lab Sci & Engn Comp, Minist Educ, Shanghai 200240, Peoples R China
[3] Zhongnan Univ Econ & Law, Sch Stat & Math, Wuhan 430073, Peoples R China
[4] Iowa State Univ, Dept Stat, Ames, IA 50011 USA
Keywords
Robust regression; sparse learning; differential privacy; least absolute deviation; quantile regression
DOI
10.1109/TIFS.2023.3349054
CLC number
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
In recent years, privacy-preserving machine learning algorithms have attracted increasing attention because of their important applications in many scientific fields. However, most privacy-preserving algorithms in the literature require the learning objective to be strongly convex and Lipschitz smooth, and therefore cannot cover a wide class of robust loss functions (e.g., the quantile/least absolute loss). In this work, we aim to develop a fast privacy-preserving learning solution for a sparse robust regression problem. Our learning loss consists of a robust least absolute loss and an ℓ1 sparse penalty term. To solve the non-smooth loss quickly under a given privacy budget, we develop a Fast Robust And Privacy-Preserving Estimation (FRAPPE) algorithm for least absolute deviation regression. Our algorithm achieves fast estimation by reformulating the sparse LAD problem as a penalized least squares estimation problem and adopts a three-stage noise injection to guarantee (ε, δ)-differential privacy. We show that our algorithm achieves a better trade-off between privacy and statistical accuracy than state-of-the-art privacy-preserving regression algorithms. Finally, we conduct experiments to verify the efficiency of the proposed FRAPPE algorithm.
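The abstract combines three ingredients: a least absolute deviation (LAD) loss for robustness, an ℓ1 penalty for sparsity, and calibrated noise for (ε, δ)-differential privacy. The sketch below is a minimal illustration of those general ingredients only, not the authors' FRAPPE algorithm: it replaces the paper's reformulation with a generic IRLS least-squares surrogate, stands in for the ℓ1 penalty with a single soft-thresholding step, and privatizes the output with the standard Gaussian mechanism. The sensitivity bound, penalty level, and all other parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mechanism(theta, sensitivity, eps, delta, rng):
    # Gaussian mechanism: add N(0, sigma^2) noise calibrated so the released
    # vector is (eps, delta)-DP, assuming `sensitivity` upper-bounds the
    # l2 sensitivity of the estimate (assumed here, not derived).
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return theta + rng.normal(0.0, sigma, size=theta.shape)

# Toy sparse regression with heavy-tailed (Cauchy) noise: only the first
# two of ten coefficients are nonzero, so a robust + sparse fit is natural.
n, p = 500, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:2] = [2.0, -1.5]
y = X @ beta_true + 0.1 * rng.standard_cauchy(n)

# Least-squares surrogate for the non-smooth LAD step: iteratively
# reweighted least squares (IRLS), turning sum |r_i| into weighted squares.
theta = np.zeros(p)
for _ in range(25):
    w = 1.0 / np.maximum(np.abs(y - X @ theta), 1e-6)
    Xw = X * w[:, None]
    theta = np.linalg.solve(Xw.T @ X, Xw.T @ y)

# One soft-thresholding (proximal) step as a crude stand-in for the
# l1 penalty: it zeroes out the near-zero coefficients.
lam = 0.1
theta = np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)

# Release a privatized estimate (sensitivity value chosen for illustration).
theta_priv = gaussian_mechanism(theta, sensitivity=0.05,
                                eps=1.0, delta=1e-5, rng=rng)
```

The paper's contribution lies precisely where this sketch cuts corners: a principled reformulation of the penalized LAD problem and a three-stage noise-injection schedule with provable privacy/accuracy guarantees.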
Pages: 2328-2339
Number of pages: 12
References
53 items in total
[1]   Deep Learning with Differential Privacy [J].
Abadi, Martin ;
Chu, Andy ;
Goodfellow, Ian ;
McMahan, H. Brendan ;
Mironov, Ilya ;
Talwar, Kunal ;
Zhang, Li .
CCS'16: PROCEEDINGS OF THE 2016 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2016, :308-318
[2]  
Balle B, 2018, ADV NEUR IN, V31
[3]   l1-PENALIZED QUANTILE REGRESSION IN HIGH-DIMENSIONAL SPARSE MODELS [J].
Belloni, Alexandre ;
Chernozhukov, Victor .
ANNALS OF STATISTICS, 2011, 39 (01) :82-130
[4]   Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds [J].
Bun, Mark ;
Steinke, Thomas .
THEORY OF CRYPTOGRAPHY, TCC 2016-B, PT I, 2016, 9985 :635-658
[5]   THE COST OF PRIVACY: OPTIMAL RATES OF CONVERGENCE FOR PARAMETER ESTIMATION WITH DIFFERENTIAL PRIVACY [J].
Cai, T. Tony ;
Wang, Yichen ;
Zhang, Linjun .
ANNALS OF STATISTICS, 2021, 49 (05) :2825-2850
[6]  
Chaudhuri K, 2011, J MACH LEARN RES, V12, P1069
[7]  
Duchi JC, 2018, J AM STAT ASSOC, V113, P182, DOI 10.1080/01621459.2017.1389735
[8]   The Algorithmic Foundations of Differential Privacy [J].
Dwork, Cynthia ;
Roth, Aaron .
FOUNDATIONS AND TRENDS IN THEORETICAL COMPUTER SCIENCE, 2013, 9 (3-4) :211-406
[9]  
Dwork C., Su W., 2021, J PRIVACY CONFIDENTIALITY, V11, P1
[10]   STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION [J].
Fan, Jianqing ;
Xue, Lingzhou ;
Zou, Hui .
ANNALS OF STATISTICS, 2014, 42 (03) :819-849