Neuronized Priors for Bayesian Sparse Linear Regression

Cited by: 8
Authors
Shin, Minsuk [1 ]
Liu, Jun S. [2 ]
Affiliations
[1] Univ South Carolina, Dept Stat, Columbia, SC 29208 USA
[2] Harvard Univ, Dept Stat, Cambridge, MA 02138 USA
Keywords
Bayesian shrinkage; Scalable Bayesian computation; Spike-and-slab prior; Variable selection; VARIABLE-SELECTION; GIBBS SAMPLER; MONTE-CARLO; POSTERIOR CONCENTRATION; GEOMETRIC ERGODICITY; HORSESHOE ESTIMATOR; MODEL SELECTION; CONVERGENCE; OPTIMIZATION; CONSISTENT;
DOI
10.1080/01621459.2021.1876710
CLC Numbers
O21 [Probability theory and mathematical statistics]; C8 [Statistics];
Discipline Codes
020208 ; 070103 ; 0714 ;
Abstract
Although Bayesian variable selection methods have been intensively studied, their routine use in practice has not caught up with non-Bayesian counterparts such as the Lasso, likely due to difficulties in both computation and the flexibility of prior choices. To ease these challenges, we propose neuronized priors, which unify and extend popular shrinkage priors such as the Laplace, Cauchy, horseshoe, and spike-and-slab priors. A neuronized prior can be written as the product of a Gaussian weight variable and a scale variable obtained by transforming a Gaussian variable through an activation function. Compared with classic spike-and-slab priors, neuronized priors achieve the same explicit variable selection without employing any latent indicator variables, which results in both more efficient and flexible posterior sampling and more effective posterior modal estimation. Theoretically, we provide specific conditions on the neuronized formulation that achieve the optimal posterior contraction rate, and show that a broadly applicable MCMC algorithm attains an exponentially fast convergence rate under the neuronized formulation. We also examine various simulated and real-data examples and demonstrate that the neuronized representation is computationally as efficient as, or more efficient than, its standard counterpart in all well-known cases. An R package, NPrior, is provided for using neuronized priors in Bayesian linear regression.
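The abstract's product form can be sketched as a short simulation: draw a Gaussian weight, pass a second Gaussian through an activation, and multiply. This is a minimal illustration of the stated construction, not the NPrior implementation; the function name, the ReLU choice of activation, and the hyperparameter values (`alpha0`, `tau`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neuronized_prior(n, activation, alpha0=0.0, tau=1.0):
    """Draw n samples of theta = T(alpha - alpha0) * w: a Gaussian weight w
    times a scale term produced by passing a Gaussian alpha through the
    activation T, matching the abstract's product-form description."""
    alpha = rng.standard_normal(n)      # scale-generating Gaussian variable
    w = tau * rng.standard_normal(n)    # Gaussian weight variable
    return activation(alpha - alpha0) * w

# With a ReLU activation the prior puts an exact point mass at zero,
# giving spike-and-slab-like explicit selection without indicator variables.
relu = lambda t: np.maximum(t, 0.0)
theta = sample_neuronized_prior(100_000, relu, alpha0=1.0)
print(np.mean(theta == 0.0))  # P(theta = 0) = P(alpha < alpha0), about 0.84 here
```

Shifting `alpha0` controls the prior probability of an exact zero, which is how the activation-plus-shift form mimics the spike of a spike-and-slab prior.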
Pages: 1695-1710 (16 pages)
Related Papers (50 total)
  • [1] BAYESIAN LINEAR REGRESSION WITH SPARSE PRIORS
    Castillo, Ismael
    Schmidt-Hieber, Johannes
    Van der Vaart, Aad
    ANNALS OF STATISTICS, 2015, 43 (05): : 1986 - 2018
  • [2] Sparse Bayesian linear regression using generalized normal priors
    Zhang, Hai
    Wang, Puyu
    Dong, Qing
    Wang, Pu
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2017, 15 (03)
  • [3] SPARSE LINEAR REGRESSION WITH BETA PROCESS PRIORS
    Chen, Bo
    Paisley, John
    Carin, Lawrence
    2010 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2010, : 1234 - 1237
  • [4] On spike-and-slab priors for Bayesian equation discovery of nonlinear dynamical systems via sparse linear regression
    Nayek, R.
    Fuentes, R.
    Worden, K.
    Cross, E. J.
    MECHANICAL SYSTEMS AND SIGNAL PROCESSING, 2021, 161
  • [5] Variational Bayes for High-Dimensional Linear Regression With Sparse Priors
    Ray, Kolyan
    Szabo, Botond
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2022, 117 (539) : 1270 - 1281
  • [6] Bayesian Sparse Multivariate Regression with Asymmetric Nonlocal Priors for Microbiome Data Analysis
    Shuler, Kurtis
    Sison-Mangus, Marilou
    Lee, Juhee
    BAYESIAN ANALYSIS, 2020, 15 (02): : 559 - 578
  • [7] Sparse linear regression with structured priors and application to denoising of musical audio
    Fevotte, Cedric
    Torresani, Bruno
    Daudet, Laurent
    Godsill, Simon J.
    IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2008, 16 (01): : 174 - 185
  • [8] Empirical Priors for Prediction in Sparse High-dimensional Linear Regression
    Martin, Ryan
    Tang, Yiqi
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21
  • [9] Sparse Bayesian linear regression with latent masking variables
    Kondo, Yohei
    Hayashi, Kohei
    Maeda, Shin-ichi
    NEUROCOMPUTING, 2017, 258 : 3 - 12