Robust reduced rank regression in a distributed setting

Cited by: 0
Authors
Xi Chen
Weidong Liu
Xiaojun Mao
Affiliations
[1] New York University,Stern School of Business
[2] Shanghai Jiao Tong University,School of Mathematical Sciences and MoE Key Lab of Artificial Intelligence
[3] Fudan University,School of Data Science
Source
Science China Mathematics | 2022, Vol. 65
Keywords
reduced rank regression; distributed estimation; quantile loss; rank recovery; 62H12;
DOI
Not available
Abstract
This paper studies the reduced rank regression problem, which assumes a low-rank structure for the coefficient matrix, in the presence of heavy-tailed noise. To handle the heavy-tailed noise, we adopt the quantile loss function instead of the commonly used squared loss. However, the non-smooth quantile loss poses new challenges for both computation and the development of statistical properties, especially when the data are large in size and distributed across different machines. To this end, we first transform the response variable and reformulate the problem as a trace-norm regularized least-squares problem, which greatly facilitates computation. Based on this formulation, we further develop a distributed algorithm. Theoretically, we establish the convergence rate of the resulting estimator and a theoretical guarantee for rank recovery. A simulation study demonstrates the effectiveness of our method.
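The paper's own response transformation and distributed algorithm are not reproduced in this record. As a generic illustration of the kind of problem the reformulation yields, the following sketch solves a trace-norm (nuclear-norm) regularized least-squares problem by proximal gradient descent, whose proximal step is singular value thresholding. All function names, step sizes, and parameter choices here are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (not the authors' algorithm): solve
#   min_B  0.5 * ||Y - X B||_F^2 + lam * ||B||_*
# by proximal gradient descent; the prox of the nuclear norm
# is singular value thresholding (SVT).
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_ls(X, Y, lam, n_iter=500):
    """Proximal gradient for 0.5*||Y - XB||_F^2 + lam*||B||_*."""
    p, q = X.shape[1], Y.shape[1]
    B = np.zeros((p, q))
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ B - Y)            # gradient of the smooth part
        B = svt(B - step * grad, step * lam)
    return B

# Toy example: a rank-2 coefficient matrix recovered from noisy data.
rng = np.random.default_rng(0)
n, p, q, r = 200, 10, 8, 2
X = rng.standard_normal((n, p))
B_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))
B_hat = trace_norm_ls(X, Y, lam=1.0)
rel_err = np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true)
```

With the squared loss, SVT-based proximal gradient is a standard solver for this penalty; the paper's contribution lies in reaching such a least-squares formulation from the non-smooth quantile loss and in distributing the computation.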
Pages: 1707-1730
Number of pages: 23