Improving Lasso for model selection and prediction

Cited by: 3
Authors
Pokarowski, Piotr [1 ]
Rejchel, Wojciech [2 ]
Soltys, Agnieszka [1 ]
Frej, Michal [1 ]
Mielniczuk, Jan [3 ,4 ]
Affiliations
[1] Univ Warsaw, Inst Appl Math & Mech, Warsaw, Poland
[2] Nicolaus Copernicus Univ, Fac Math & Comp Sci, Chopina 12-18, PL-87100 Torun, Poland
[3] Polish Acad Sci, Inst Comp Sci, Warsaw, Poland
[4] Warsaw Univ Technol, Fac Math & Informat Sci, Warsaw, Poland
Keywords
convex loss function; empirical process; generalized information criterion; high-dimensional regression; penalized estimation; selection consistency; NONCONVEX PENALIZED REGRESSION; VARIABLE SELECTION; CRITERIA; REGULARIZATION; LIKELIHOOD;
DOI
10.1111/sjos.12546
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Classification Codes
020208; 070103; 0714
Abstract
It is known that the Thresholded Lasso (TL), SCAD, and MCP correct the intrinsic estimation bias of the Lasso. In this paper we propose an alternative method of improving the Lasso for predictive models with general convex loss functions, which encompass normal linear models, logistic regression, quantile regression, and support vector machines. For a given penalty we order the absolute values of the nonzero Lasso coefficients and then select the final model from a small nested family using the Generalized Information Criterion. We derive exponential upper bounds on the selection error of the method. These results confirm that, at least for normal linear models, our algorithm seems to be a benchmark for the theory of model selection, as it is constructive, computationally efficient, and leads to consistent model selection under weak assumptions. Constructivity of the algorithm means that, in contrast to the TL, SCAD, or MCP, consistent selection does not rely on unknown parameters such as the cone invertibility factor. Instead, our algorithm needs only the sample size, the number of predictors, and an upper bound on the noise parameter. We show in numerical experiments on synthetic and real-world datasets that an implementation of our algorithm is more accurate than implementations of the concave regularizations studied. Our procedure is included in the R package DMRnet, available in the CRAN repository.
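The two-step procedure described in the abstract (screen with the Lasso, order the nonzero coefficients by absolute value, then pick the GIC minimizer from the induced nested family) can be sketched as follows for a normal linear model. This is an illustrative reconstruction, not the DMRnet implementation: the function name `ss_gic`, the fixed Lasso penalty, and the GIC penalty form `2*log(p)` per parameter are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def ss_gic(X, y, lasso_alpha=0.1, penalty=None):
    """Hypothetical sketch of the screening-then-GIC idea:
    (1) order the nonzero Lasso coefficients by absolute value,
    (2) select among the resulting nested models by a Generalized
    Information Criterion after ordinary least-squares refits.
    The tuning choices here are illustrative, not the paper's."""
    n, p = X.shape
    if penalty is None:
        penalty = 2.0 * np.log(p)  # assumed GIC penalty per parameter

    # Step 1: Lasso screening; order the support by |coefficient|.
    lasso = Lasso(alpha=lasso_alpha).fit(X, y)
    support = np.flatnonzero(lasso.coef_)
    order = support[np.argsort(-np.abs(lasso.coef_[support]))]

    # Step 2: scan the nested family, starting from the empty model
    # (intercept only), and keep the GIC minimizer.
    best_gic = n * np.log(np.mean((y - y.mean()) ** 2))
    best_model = ()
    for k in range(1, len(order) + 1):
        cols = order[:k]
        ols = LinearRegression().fit(X[:, cols], y)
        rss = np.sum((y - ols.predict(X[:, cols])) ** 2)
        gic = n * np.log(rss / n) + penalty * k
        if gic < best_gic:
            best_gic, best_model = gic, tuple(cols)
    return best_model
```

On simulated data with true support {0, 1}, e.g. `y = 3*X[:,0] + 2*X[:,1] + noise`, the returned tuple lists the selected columns in decreasing order of their Lasso magnitudes.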
Pages: 831–863
Page count: 33