The Contextual Lasso: Sparse Linear Models via Deep Neural Networks

Cited by: 0
Authors:
Thompson, Ryan [1 ,2 ]
Dezfouli, Amir [3 ]
Kohn, Robert [1 ]
Affiliations:
[1] Univ New South Wales, Sydney, NSW, Australia
[2] CSIRO's Data61, Eveleigh, Australia
[3] BIMLOGIQ, Sydney, NSW, Australia
Source:
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
Keywords:
REGRESSION; REGULARIZATION; SELECTION
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract:
Sparse linear models are one of several core tools for interpretable machine learning, a field of emerging importance as predictive models permeate decision-making in many domains. Unfortunately, sparse linear models are far less flexible as functions of their input features than black-box models like deep neural networks. With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which are candidates for inclusion as variables in an interpretable model, and contextual features, which select from the candidate variables and determine their effects. This dichotomy leads us to the contextual lasso, a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features. The fitting process learns this function nonparametrically via a deep neural network. To attain sparse coefficients, we train the network with a novel lasso regularizer in the form of a projection layer that maps the network's output onto the space of ℓ1-constrained linear models. An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso without sacrificing the predictive power of a standard deep neural network.
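The sketch below illustrates the mechanism the abstract describes, not the authors' implementation: a network maps the contextual features to raw coefficients, and a projection layer maps those coefficients onto the ℓ1 ball so that the linear model applied to the explanatory features is sparse and context-dependent. The function names (project_l1_ball, contextual_lasso_predict), the random two-layer stand-in for the coefficient network, and the fixed ℓ1 radius are all illustrative assumptions.

```python
import numpy as np

def project_l1_ball(beta, radius):
    """Euclidean projection of a coefficient vector onto the l1 ball
    {b : sum(|b|) <= radius}, following the sort-and-threshold algorithm."""
    if np.abs(beta).sum() <= radius:
        return beta
    u = np.sort(np.abs(beta))[::-1]                # magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, beta.size + 1)
    rho = np.nonzero(u * k > css - radius)[0][-1]  # last index satisfying the KKT condition
    theta = (css[rho] - radius) / (rho + 1.0)      # soft-threshold level
    return np.sign(beta) * np.maximum(np.abs(beta) - theta, 0.0)

def contextual_lasso_predict(x, z, coef_net, radius):
    """Predict with a sparse linear model whose coefficients depend on context z.

    x        : (p,) explanatory features
    z        : (m,) contextual features
    coef_net : callable mapping z to (p + 1,) raw [intercept, coefficients]
    radius   : l1 budget enforced by the projection layer
    """
    raw = coef_net(z)
    intercept, beta = raw[0], project_l1_ball(raw[1:], radius)
    return intercept + x @ beta, beta

# Toy usage: a random two-layer network stands in for the learned coefficient network.
rng = np.random.default_rng(0)
p, m = 10, 3
W1, W2 = rng.normal(size=(16, m)), rng.normal(size=(p + 1, 16))
coef_net = lambda z: W2 @ np.tanh(W1 @ z)

x, z = rng.normal(size=p), rng.normal(size=m)
y_hat, beta = contextual_lasso_predict(x, z, coef_net, radius=1.0)
print(y_hat, np.count_nonzero(beta))  # the projection zeroes out many coefficients
```

In training, this projection would sit as the final layer of the network so that the ℓ1 constraint (and hence sparsity) is enforced throughout fitting; the projection is piecewise linear in its input, so automatic differentiation can backpropagate through it.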
Pages: 22