FUNDAMENTAL BARRIERS TO HIGH-DIMENSIONAL REGRESSION WITH CONVEX PENALTIES

Cited by: 19
Authors
Celentano, Michael [1]
Montanari, Andrea [1 ]
Affiliations
[1] Stanford Univ, Dept Stat, Stanford, CA 94305 USA
Keywords
High-dimensional regression; M-estimation; computational-to-statistical gaps; approximate message passing; convex penalty; MESSAGE-PASSING ALGORITHMS; STATISTICAL ESTIMATION; VARIABLE SELECTION; PHASE-TRANSITIONS; ROBUST REGRESSION; STATE EVOLUTION; LASSO; SLOPE; INFORMATION; SHRINKAGE;
DOI
10.1214/21-AOS2100
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject Classification Codes
020208 ; 070103 ; 0714 ;
Abstract
In high-dimensional regression, we attempt to estimate a parameter vector $\beta_0 \in \mathbb{R}^p$ from $n \lesssim p$ observations $\{(y_i, x_i)\}_{i \le n}$, where $x_i \in \mathbb{R}^p$ is a vector of predictors and $y_i$ is a response variable. A well-established approach uses convex regularizers to promote specific structures (e.g., sparsity) of the estimate $\hat{\beta}$ while allowing for practical algorithms. Theoretical analysis implies that convex penalization schemes have nearly optimal estimation properties in certain settings. However, in general the gaps between statistically optimal estimation (with unbounded computational resources) and convex methods are poorly understood. We show that when the statistician has very simple structural information about the distribution of the entries of $\beta_0$, a large gap frequently exists between the best performance achieved by any convex regularizer satisfying a mild technical condition and either: (i) the optimal statistical error or (ii) the statistical error achieved by optimal approximate message passing algorithms. Remarkably, a gap occurs at sufficiently high signal-to-noise ratio if and only if the distribution of the coordinates of $\beta_0$ is not log-concave. These conclusions follow from an analysis of standard Gaussian designs. Our lower bounds are expected to be generally tight, and we prove tightness under certain conditions.
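For concreteness, the convex penalization schemes referred to in the abstract can be read as penalized least-squares estimators of the form sketched below. This is a minimal sketch under the usual linear-model assumption, with a generic convex penalty $\rho$; the particular choice $\rho(b) = \|b\|_1$ (the Lasso) is mentioned only as an illustrative example and is not the specific regularizer analyzed in the paper.

% Minimal sketch (illustrative assumption): a generic convex penalized
% least-squares estimator. Here rho is any convex penalty, e.g.
% rho(b) = \|b\|_1 recovers the Lasso, and lambda >= 0 is a tuning parameter.
\[
  \hat{\beta}(\lambda) \;=\; \arg\min_{b \in \mathbb{R}^p}
  \left\{ \frac{1}{2n} \sum_{i=1}^{n} \bigl( y_i - x_i^{\mathsf T} b \bigr)^2
          \;+\; \lambda\, \rho(b) \right\}.
\]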
Pages: 170-196
Page count: 27