The Contextual Lasso: Sparse Linear Models via Deep Neural Networks

Cited by: 0
Authors
Thompson, Ryan [1 ,2 ]
Dezfouli, Amir [3 ]
Kohn, Robert [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] CSIRO's Data61, Eveleigh, Australia
[3] BIMLOGIQ, Sydney, NSW, Australia
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
REGRESSION; REGULARIZATION; SELECTION
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Sparse linear models are one of several core tools for interpretable machine learning, a field of emerging importance as predictive models permeate decision-making in many domains. Unfortunately, sparse linear models are far less flexible as functions of their input features than black-box models like deep neural networks. With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which are candidates for inclusion as variables in an interpretable model, and contextual features, which select from the candidate variables and determine their effects. This dichotomy leads us to the contextual lasso, a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features. The fitting process learns this function nonparametrically via a deep neural network. To attain sparse coefficients, we train the network with a novel lasso regularizer in the form of a projection layer that maps the network's output onto the space of ℓ1-constrained linear models. An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso without sacrificing the predictive power of a standard deep neural network.
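The following is a minimal sketch, not the authors' implementation, of the key mechanism the abstract describes: a network maps contextual features to a per-observation coefficient vector, and a projection layer maps that vector onto an ℓ1 ball so that many coefficients become exactly zero. The projection shown is the standard Euclidean projection onto the ℓ1 ball (Duchi et al., 2008); the function name project_l1_ball, the radius value, and the stand-in network output beta_raw are hypothetical placeholders for illustration only.

    import numpy as np

    def project_l1_ball(v, radius=1.0):
        """Euclidean projection of v onto {w : ||w||_1 <= radius}."""
        if np.sum(np.abs(v)) <= radius:
            return v  # already inside the ball, nothing to do
        u = np.sort(np.abs(v))[::-1]               # magnitudes, descending
        css = np.cumsum(u)
        ks = np.arange(1, len(v) + 1)
        rho = np.max(np.where(u - (css - radius) / ks > 0)[0])  # last valid index
        theta = (css[rho] - radius) / (rho + 1)    # soft-threshold level
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    # Hypothetical usage: beta_raw stands in for a neural network's output given
    # the contextual features; the projected coefficients define a sparse linear
    # model that is applied to the explanatory features x.
    rng = np.random.default_rng(0)
    beta_raw = rng.normal(size=10)
    beta_sparse = project_l1_ball(beta_raw, radius=2.0)
    x = rng.normal(size=10)
    prediction = x @ beta_sparse                   # sparse linear prediction
    print(np.round(beta_sparse, 3))

Because the projection sets small-magnitude outputs exactly to zero, the coefficients remain interpretable as a sparse linear model even though they are produced by a deep network.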
Pages: 22