The Contextual Lasso: Sparse Linear Models via Deep Neural Networks

Cited: 0
|
Authors
Thompson, Ryan [1 ,2 ]
Dezfouli, Amir [3 ]
Kohn, Robert [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] CSIRO's Data61, Eveleigh, Australia
[3] BIMLOGIQ, Sydney, NSW, Australia
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
REGRESSION; REGULARIZATION; SELECTION
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Sparse linear models are one of several core tools for interpretable machine learning, a field of emerging importance as predictive models permeate decision-making in many domains. Unfortunately, sparse linear models are far less flexible as functions of their input features than black-box models like deep neural networks. With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which are candidates for inclusion as variables in an interpretable model, and contextual features, which select from the candidate variables and determine their effects. This dichotomy leads us to the contextual lasso, a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features. The fitting process learns this function nonparametrically via a deep neural network. To attain sparse coefficients, we train the network with a novel lasso regularizer in the form of a projection layer that maps the network's output onto the space of ℓ1-constrained linear models. An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso without sacrificing the predictive power of a standard deep neural network.
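To make the projection-layer idea in the abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation: a small network maps contextual features z to per-observation coefficients β(z), those coefficients are projected onto an ℓ1-ball via the standard sorting-based Euclidean projection (Duchi et al., 2008) so many entries become exactly zero, and the prediction is the sparse linear model β(z)·x. The architecture, the ℓ1 radius, and all names (project_onto_l1_ball, ContextualSparseLinear, coef_net, hidden width) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


def project_onto_l1_ball(beta, radius):
    """Row-wise Euclidean projection onto the l1-ball of the given radius
    (sorting-based algorithm of Duchi et al., 2008). Rows already inside
    the ball are returned unchanged."""
    abs_beta = beta.abs()
    inside = abs_beta.sum(dim=1) <= radius
    # Sort magnitudes in decreasing order and find the per-row soft threshold.
    sorted_abs, _ = torch.sort(abs_beta, dim=1, descending=True)
    cumsum = sorted_abs.cumsum(dim=1)
    k = torch.arange(1, beta.size(1) + 1, device=beta.device, dtype=beta.dtype)
    rho = ((sorted_abs - (cumsum - radius) / k) > 0).sum(dim=1)
    theta = (cumsum.gather(1, (rho - 1).unsqueeze(1)).squeeze(1) - radius) / rho.to(beta.dtype)
    projected = torch.sign(beta) * torch.clamp(abs_beta - theta.unsqueeze(1), min=0.0)
    return torch.where(inside.unsqueeze(1), beta, projected)


class ContextualSparseLinear(nn.Module):
    """Maps contextual features z to sparse coefficients beta(z), then
    predicts with the linear model beta(z) . x plus an intercept."""

    def __init__(self, n_contextual, n_explanatory, radius=1.0, hidden=64):
        super().__init__()
        self.radius = radius
        self.coef_net = nn.Sequential(
            nn.Linear(n_contextual, hidden), nn.ReLU(),
            nn.Linear(hidden, n_explanatory + 1),  # coefficients + intercept
        )

    def forward(self, x, z):
        out = self.coef_net(z)
        beta, intercept = out[:, :-1], out[:, -1]
        beta = project_onto_l1_ball(beta, self.radius)  # sparsity via l1 projection
        return (beta * x).sum(dim=1) + intercept, beta


# Illustrative usage on random data (shapes only; not the paper's experiments).
x = torch.randn(32, 10)   # explanatory features
z = torch.randn(32, 3)    # contextual features
model = ContextualSparseLinear(n_contextual=3, n_explanatory=10, radius=2.0)
pred, beta = model(x, z)  # rows of beta contain exact zeros induced by the projection
```

Because the projection is applied inside the forward pass, the sparsity pattern of β(z) can change with the context z while the downstream prediction remains a transparent linear function of the explanatory features.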
Pages: 22
Related Papers
50 items in total
  • [21] Multi-Task Learning for Compositional Data via Sparse Network Lasso
    Okazaki, Akira
    Kawano, Shuichi
    ENTROPY, 2022, 24 (12)
  • [22] Linearized alternating direction method of multipliers for sparse group and fused LASSO models
    Li, Xinxin
    Mo, Lili
    Yuan, Xiaoming
    Zhang, Jianzhong
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2014, 79 : 203 - 221
  • [23] Learning regularization parameters of inverse problems via deep neural networks
    Afkham, Babak Maboudi
    Chung, Julianne
    Chung, Matthias
    INVERSE PROBLEMS, 2021, 37 (10)
  • [24] Learning sparse deep neural networks with a spike-and-slab prior
    Sun, Yan
    Song, Qifan
    Liang, Faming
    STATISTICS & PROBABILITY LETTERS, 2022, 180
  • [25] Distributed Bayesian Piecewise Sparse Linear Models
    Asahara, Masato
    Fujimaki, Ryohei
    2017 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2017, : 883 - 888
  • [26] Dual Extrapolation for Sparse Generalized Linear Models
    Massias, Mathurin
    Vaiter, Samuel
    Gramfort, Alexandre
    Salmon, Joseph
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21 : 1 - 33
  • [27] A Bregman Learning Framework for Sparse Neural Networks
    Bungert, Leon
    Roith, Tim
    Tenbrinck, Daniel
    Burger, Martin
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [28] Sparse calibration based on adaptive lasso penalty for computer models
    Sun, Yang
    Fang, Xiangzhong
    COMMUNICATIONS IN STATISTICS-SIMULATION AND COMPUTATION, 2024, 53 (10) : 4738 - 4752
  • [29] ESTIMATION OF SPARSE FUNCTIONAL ADDITIVE MODELS WITH ADAPTIVE GROUP LASSO
    Sang, Peijun
    Wang, Liangliang
    Cao, Jiguo
    STATISTICA SINICA, 2020, 30 (03) : 1191 - 1211
  • [30] Estimation of covariance matrix via the sparse Cholesky factor with lasso
    Chang, Changgee
    Tsay, Ruey S.
    JOURNAL OF STATISTICAL PLANNING AND INFERENCE, 2010, 140 (12) : 3858 - 3873