Understanding Implicit Regularization in Over-Parameterized Single Index Model

Cited by: 7
Authors
Fan, Jianqing [1]
Yang, Zhuoran [2]
Yu, Mengxin [1]
Affiliations
[1] Princeton Univ, Dept Operat Res & Financial Engn, Princeton, NJ 08544 USA
[2] Yale Univ, Dept Stat & Data Sci, New Haven, CT 06520 USA
Keywords
High-dimensional models; Implicit regularization; Over-parameterization; Single-index models; Covariance estimation; Variable selection; Phase retrieval; Matrix; Lasso
DOI
10.1080/01621459.2022.2044824
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
In this article, we leverage over-parameterization to design regularization-free algorithms for the high-dimensional single index model and provide theoretical guarantees for the induced implicit regularization phenomenon. Specifically, we study both vector and matrix single index models where the link function is nonlinear and unknown, the signal parameter is either a sparse vector or a low-rank symmetric matrix, and the response variable can be heavy-tailed. To gain a better understanding of the role played by implicit regularization without excess technicality, we assume that the distribution of the covariates is known a priori. For both the vector and matrix settings, we construct an over-parameterized least-squares loss function by employing the score function transform and a robust truncation step designed specifically for heavy-tailed data. We propose to estimate the true parameter by applying regularization-free gradient descent to the loss function. When the initialization is close to the origin and the stepsize is sufficiently small, we prove that the obtained solution achieves minimax optimal statistical rates of convergence in both the vector and matrix cases. In addition, our experimental results support our theoretical findings and also demonstrate that our methods empirically outperform classical methods with explicit regularization in terms of both the ℓ2 statistical rate and variable selection consistency. Supplementary materials for this article are available online.
Pages: 2315-2328 (14 pages)
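
The abstract above describes a concrete two-step procedure for the vector case: a score-function transform with robust truncation, followed by regularization-free gradient descent on an over-parameterized least-squares loss from a near-zero initialization. The sketch below is a minimal illustration of that recipe, not the authors' exact algorithm. It assumes standard Gaussian covariates (so the first-order score function is simply S(x) = x and Stein's identity gives E[y·S(x)] = μβ* with μ = E[f′(x⊤β*)]), uses plain clipping as the truncation step, and uses the Hadamard-product over-parameterization β = w⊙w − v⊙v common in this line of work. The link function, truncation level, initialization scale, stepsize, and iteration count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse vector single index model: y = f(x' beta*) + noise.
n, d, s = 1000, 200, 5
beta_star = np.zeros(d)
beta_star[:s] = 1.0 / np.sqrt(s)              # unit-norm, s-sparse signal
X = rng.standard_normal((n, d))               # standard Gaussian covariates
f = lambda t: t + 0.5 * np.sin(t)             # unknown nonlinear link (illustrative)
y = f(X @ beta_star) + rng.standard_t(df=3, size=n)  # heavy-tailed responses

# Step 1: score transform with truncation. For standard Gaussian covariates
# S(x) = x, so the clipped empirical moment below estimates mu * beta*;
# clipping y guards against heavy tails (truncation level is illustrative).
tau = np.sqrt(n / np.log(n))
m_hat = (np.clip(y, -tau, tau)[:, None] * X).mean(axis=0)

# Step 2: over-parameterize beta = w*w - v*v and minimize the penalty-free
# least-squares loss L(w, v) = 0.5 * ||w*w - v*v - m_hat||^2 by plain
# gradient descent from a near-zero initialization. The small initialization
# plus stopping before noise coordinates escape the origin plays the role
# of explicit sparsity regularization.
alpha, eta, T = 1e-8, 0.05, 500
w = alpha * np.ones(d)
v = alpha * np.ones(d)
for _ in range(T):
    r = w * w - v * v - m_hat                 # residual of the loss
    w -= eta * 2.0 * w * r                    # dL/dw =  2 * w * r
    v += eta * 2.0 * v * r                    # dL/dv = -2 * v * r

beta_hat = w * w - v * v                      # approximately mu * beta*
print("largest coordinates:", np.sort(np.argsort(-np.abs(beta_hat))[:s]))
```

Note that no penalty term appears anywhere: the signal coordinates of the iterate escape the origin first while the noise coordinates stay near zero, so the path of iterates is effectively sparse. Because the method targets μβ* with μ unknown, direction and support recovery are the natural goals; the matrix case in the paper replaces the Hadamard parameterization with a low-rank factorization and, roughly speaking, a second-order score transform.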