Dual extrapolation for sparse generalized linear models

Cited by: 0
Authors
Massias, Mathurin [1]
Vaiter, Samuel [2]
Gramfort, Alexandre [1]
Salmon, Joseph [3]
Affiliations
[1] Université Paris-Saclay, Inria, CEA, Palaiseau, France
[2] CNRS, Institut de Mathématiques de Bourgogne, Dijon, 21078, France
[3] IMAG, Univ Montpellier, CNRS, Montpellier, 34095, France
Keywords
Convex optimization; Regression analysis; Gradient methods
DOI: not available
Abstract
Generalized Linear Models (GLM) form a wide class of regression and classification models, where prediction is a function of a linear combination of the input variables. For statistical inference in high dimension, sparsity-inducing regularizations have proven to be useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, or by solving a growing sequence of smaller problems. For both techniques, significant variables are identified thanks to convex duality arguments. In this paper, we show that the dual iterates of a GLM exhibit a Vector AutoRegressive (VAR) behavior after sign identification, when the primal problem is solved with proximal gradient descent or cyclic coordinate descent. Exploiting this regularity, one can construct dual points that offer tighter certificates of optimality, enhancing the performance of screening rules and working set algorithms. © 2020 Mathurin Massias, Samuel Vaiter, Alexandre Gramfort and Joseph Salmon. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v21/19-587.html.
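The construction the abstract alludes to can be sketched on the Lasso, a canonical sparse GLM: rescaling the residual of the primal iterate into the dual feasible set yields a certificate of optimality (a duality gap), and extrapolating the last few residuals, whose VAR-like regularity makes them predictable, typically yields a tighter one. The sketch below is our own illustration under assumed names (`rescale_dual`, `extrapolate`, etc.), not the authors' reference implementation; their released code lives in the `celer` package.

```python
import numpy as np

# Illustrative sketch of dual extrapolation for the Lasso:
#   min_w 0.5 * ||y - Xw||^2 + lam * ||w||_1,
# whose dual feasible set is {theta : ||X^T theta||_inf <= lam}.

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rescale_dual(X, r, lam):
    # Shrink a residual so it lands in the dual feasible set.
    return r * min(1.0, lam / np.linalg.norm(X.T @ r, ord=np.inf))

def extrapolate(residuals):
    # Combine the last residuals with coefficients c solving
    # (U^T U) c = 1, where U stacks successive differences --
    # the step that exploits the (assumed) VAR structure.
    R = np.column_stack(residuals)
    U = np.diff(R, axis=1)
    try:
        c = np.linalg.solve(U.T @ U, np.ones(U.shape[1]))
    except np.linalg.LinAlgError:
        return R[:, -1]
    if not np.all(np.isfinite(c)) or abs(c.sum()) < 1e-12:
        return R[:, -1]                      # fall back to last residual
    r_acc = R[:, 1:] @ (c / c.sum())
    return r_acc if np.all(np.isfinite(r_acc)) else R[:, -1]

rng = np.random.default_rng(0)
n, p, K = 50, 100, 5
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = 0.5 * np.linalg.norm(X.T @ y, ord=np.inf)
step = 1.0 / np.linalg.norm(X, 2) ** 2       # 1 / Lipschitz constant

# Proximal gradient (ISTA) on the primal, keeping the last K+1 residuals.
w = np.zeros(p)
residuals = []
for _ in range(50):
    w = soft_threshold(w + step * (X.T @ (y - X @ w)), step * lam)
    residuals.append(y - X @ w)
    residuals = residuals[-(K + 1):]

theta_plain = rescale_dual(X, residuals[-1], lam)
theta_extra = rescale_dual(X, extrapolate(residuals), lam)

def gap(theta):
    # Duality gap: primal value minus dual value at a feasible theta.
    primal = 0.5 * np.linalg.norm(y - X @ w) ** 2 + lam * np.abs(w).sum()
    dual = 0.5 * np.linalg.norm(y) ** 2 - 0.5 * np.linalg.norm(y - theta) ** 2
    return primal - dual

gap_plain, gap_extra = gap(theta_plain), gap(theta_extra)
```

Both certificates are valid by weak duality (the gap is nonnegative for any feasible dual point); on examples like this the extrapolated gap is typically the smaller of the two, which is what lets screening rules and working sets discard more variables earlier.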
Related papers (50 results in total)
  • [1] Dual Extrapolation for Sparse Generalized Linear Models
    Massias, Mathurin
    Vaiter, Samuel
    Gramfort, Alexandre
    Salmon, Joseph
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21: 1-33
  • [2] Posterior contraction in sparse generalized linear models
    Jeong, Seonghyun
    Ghosal, Subhashis
    BIOMETRIKA, 2021, 108 (02): 367-379
  • [3] Bayesian inference for sparse generalized linear models
    Seeger, Matthias
    Gerwinn, Sebastian
    Bethge, Matthias
    MACHINE LEARNING: ECML 2007, PROCEEDINGS, 2007, 4701: 298+
  • [4] Fast Sparse Classification for Generalized Linear and Additive Models
    Liu, Jiachang
    Zhong, Chudi
    Seltzer, Margo
    Rudin, Cynthia
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [5] Goodness of fit of generalized linear models to sparse data
    Paul, SR
    Deng, DL
    JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-STATISTICAL METHODOLOGY, 2000, 62: 323-333
  • [6] Sparse principal component regression for generalized linear models
    Kawano, Shuichi
    Fujisawa, Hironori
    Takada, Toyoyuki
    Shiroishi, Toshihiko
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2018, 124: 180-196
  • [7] Robust prediction and extrapolation designs for misspecified generalized linear regression models
    Wiens, Douglas P.
    Xu, Xiaojian
    JOURNAL OF STATISTICAL PLANNING AND INFERENCE, 2008, 138 (01): 30-46
  • [8] On assessing goodness of fit of generalized linear models to sparse data
    Farrington, CP
    JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-METHODOLOGICAL, 1996, 58 (02): 349-360
  • [9] dglars: An R Package to Estimate Sparse Generalized Linear Models
    Augugliaro, Luigi
    Mineo, Angelo M.
    Wit, Ernst C.
    JOURNAL OF STATISTICAL SOFTWARE, 2014, 59 (08): 1-40
  • [10] Integrative factor-adjusted sparse generalized linear models
    Xu, Fuzhi
    Ma, Shuangge
    Zhang, Qingzhao
    JOURNAL OF STATISTICAL COMPUTATION AND SIMULATION, 2025, 95 (04): 764-780