Variable Selection in High-dimensional Varying-coefficient Models with Global Optimality

Cited: 0
Authors
Xue, Lan [1 ]
Qu, Annie [2 ]
Affiliations
[1] Oregon State Univ, Dept Stat, Corvallis, OR 97331 USA
[2] Univ Illinois, Dept Stat, Champaign, IL 61820 USA
Funding
National Science Foundation (USA)
Keywords
coordinate descent algorithm; difference convex programming; L0 regularization; large-p small-n; model selection; nonparametric function; oracle property; truncated L1 penalty; NONCONCAVE PENALIZED LIKELIHOOD; LINEAR-MODELS; REGRESSION; SHRINKAGE; INFERENCE
DOI
None available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The varying-coefficient model is flexible and powerful for modeling dynamic changes of regression coefficients. It is important to identify significant covariates associated with the response variable, especially in high-dimensional settings where the number of covariates can be larger than the sample size. We consider model selection in the high-dimensional setting, adopt difference convex programming to approximate the L0 penalty, and investigate the global optimality properties of the resulting varying-coefficient estimator. The challenge of this variable selection problem is that the dimension of the nonparametric component of the varying-coefficient model can be infinite, in addition to the high-dimensional linear covariates. We show that the proposed varying-coefficient estimator is consistent, enjoys the oracle property, and achieves the optimal convergence rate for the nonzero nonparametric components in high-dimensional data. Our simulations and numerical examples indicate that the difference convex algorithm, implemented via coordinate descent, is efficient, and that it selects the true model more frequently than the least absolute shrinkage and selection operator (LASSO), the adaptive LASSO, and the smoothly clipped absolute deviation (SCAD) approaches.
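The abstract describes approximating the L0 penalty with a truncated L1 penalty and solving the resulting problem by difference convex (DC) programming with an inner coordinate descent. Below is a minimal sketch of that idea on a plain linear regression; the function names, data scaling, and tuning values (`lam`, `tau`) are illustrative assumptions, and the paper's actual estimator additionally handles basis expansions of the nonparametric coefficient functions, which this sketch omits.

```python
import numpy as np

def truncated_l1(beta, lam, tau):
    """Truncated L1 penalty J(b) = lam * min(|b|/tau, 1).

    As tau -> 0 this approaches lam * 1{b != 0}, i.e. an L0 penalty.
    It is a difference of two convex functions:
        J(b) = (lam/tau)*|b| - (lam/tau)*max(|b| - tau, 0),
    which is what makes DC programming applicable.
    """
    return lam * np.minimum(np.abs(beta) / tau, 1.0)

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dc_coordinate_descent(X, y, lam=0.05, tau=0.1, n_outer=10, n_inner=50):
    """DC programming with an inner coordinate-descent solver (a sketch).

    Each outer step linearizes the concave part of the truncated L1
    penalty at the current iterate, leaving a weighted-L1 (lasso-type)
    subproblem that coordinate descent solves in closed-form updates.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # per-coordinate curvature x_j' x_j
    for _ in range(n_outer):
        # Linearized weights: small coefficients (|b_j| <= tau) keep the
        # full slope lam/tau; large ones escape shrinkage entirely, which
        # is the source of the reduced bias relative to the LASSO.
        w = np.where(np.abs(beta) <= tau, lam / tau, 0.0)
        for _ in range(n_inner):
            for j in range(p):
                # Partial residual excluding coordinate j.
                r = y - X @ beta + X[:, j] * beta[j]
                beta[j] = soft_threshold(X[:, j] @ r, n * w[j]) / col_sq[j]
    return beta
```

Because nonzero coefficients larger than `tau` receive zero weight in later outer iterations, the estimator behaves like an oracle least-squares fit on the selected support, consistent with the oracle property claimed in the abstract.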
Citation
Pages: 1973-1998
Page count: 26