The Contextual Lasso: Sparse Linear Models via Deep Neural Networks

Cited by: 0
Authors
Thompson, Ryan [1 ,2 ]
Dezfouli, Amir [3 ]
Kohn, Robert [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] CSIRO's Data61, Eveleigh, Australia
[3] BIMLOGIQ, Sydney, NSW, Australia
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
REGRESSION; REGULARIZATION; SELECTION;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Sparse linear models are one of several core tools for interpretable machine learning, a field of emerging importance as predictive models permeate decision-making in many domains. Unfortunately, sparse linear models are far less flexible as functions of their input features than black-box models like deep neural networks. With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which are candidates for inclusion as variables in an interpretable model, and contextual features, which select from the candidate variables and determine their effects. This dichotomy leads us to the contextual lasso, a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features. The fitting process learns this function nonparametrically via a deep neural network. To attain sparse coefficients, we train the network with a novel lasso regularizer in the form of a projection layer that maps the network's output onto the space of ℓ1-constrained linear models. An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso without sacrificing the predictive power of a standard deep neural network.
Pages: 22
Related Papers
50 items in total
  • [31] Square-root lasso: pivotal recovery of sparse signals via conic programming
    Belloni, A.
    Chernozhukov, V.
    Wang, L.
    BIOMETRIKA, 2011, 98 (04) : 791 - 806
  • [32] Sparse EEG/MEG source estimation via a group lasso
    Lim, Michael
    Ales, Justin M.
    Cottereau, Benoit R.
    Hastie, Trevor
    Norcia, Anthony M.
    PLOS ONE, 2017, 12 (06)
  • [33] Nonlinear Variable Selection via Deep Neural Networks
    Chen, Yao
    Gao, Qingyi
    Liang, Faming
    Wang, Xiao
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2021, 30 (02) : 484 - 492
  • [34] Sparse Linear Isotonic Models
    Chen, Sheng
    Banerjee, Arindam
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018
  • [35] Neural generators of sparse local linear models for achieving both accuracy and interpretability
    Yoshikawa, Yuya
    Iwata, Tomoharu
    INFORMATION FUSION, 2022, 81 : 116 - 128
  • [36] LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks
    Tartaglione, Enzo
    Bragagnolo, Andrea
    Fiandrotti, Attilio
    Grangetto, Marco
    NEURAL NETWORKS, 2022, 146 : 230 - 237
  • [37] Deep Feature Selection using an Enhanced Sparse Group Lasso Algorithm
    Farokhmanesh, Fatemeh
    Sadeghi, Mohammad Taghi
    2019 27TH IRANIAN CONFERENCE ON ELECTRICAL ENGINEERING (ICEE 2019), 2019, : 1549 - 1552
  • [38] The Discrete Dantzig Selector: Estimating Sparse Linear Models via Mixed Integer Linear Optimization
    Mazumder, Rahul
    Radchenko, Peter
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2017, 63 (05) : 3053 - 3075
  • [39] Fast Sparse Classification for Generalized Linear and Additive Models
    Liu, Jiachang
    Zhong, Chudi
    Seltzer, Margo
    Rudin, Cynthia
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022
  • [40] Sparse principal component regression for generalized linear models
    Kawano, Shuichi
    Fujisawa, Hironori
    Takada, Toyoyuki
    Shiroishi, Toshihiko
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2018, 124 : 180 - 196