Neuronized Priors for Bayesian Sparse Linear Regression

Cited by: 8
Authors
Shin, Minsuk [1 ]
Liu, Jun S. [2 ]
Affiliations
[1] Univ South Carolina, Dept Stat, Columbia, SC 29208 USA
[2] Harvard Univ, Dept Stat, Cambridge, MA 02138 USA
Keywords
Bayesian shrinkage; Scalable Bayesian computation; Spike-and-slab prior; Variable selection
Keywords Plus
VARIABLE-SELECTION; GIBBS SAMPLER; MONTE-CARLO; POSTERIOR CONCENTRATION; GEOMETRIC ERGODICITY; HORSESHOE ESTIMATOR; MODEL SELECTION; CONVERGENCE; OPTIMIZATION; CONSISTENT
DOI
10.1080/01621459.2021.1876710
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
Although Bayesian variable selection methods have been intensively studied, their routine use in practice has not caught up with that of non-Bayesian counterparts such as the Lasso, likely due to difficulties in both computation and the flexibility of prior choices. To ease these challenges, we propose neuronized priors to unify and extend popular shrinkage priors, such as the Laplace, Cauchy, horseshoe, and spike-and-slab priors. A neuronized prior can be written as the product of a Gaussian weight variable and a scale variable obtained by transforming a Gaussian through an activation function. Compared with classic spike-and-slab priors, neuronized priors achieve the same explicit variable selection without employing any latent indicator variables, which results in both more efficient and flexible posterior sampling and more effective posterior modal estimation. Theoretically, we provide specific conditions on the neuronized formulation that achieve the optimal posterior contraction rate, and show that a broadly applicable MCMC algorithm attains an exponentially fast convergence rate under the neuronized formulation. We also examine various simulated and real data examples and demonstrate that the neuronized representation is computationally more efficient than, or comparable to, its standard counterpart in all well-known cases. An R package, NPrior, is provided for using neuronized priors in Bayesian linear regression.
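The construction described in the abstract can be sketched numerically: each coefficient is written as theta_j = T(alpha_j - alpha_0) * w_j, where alpha_j and w_j are independent standard Gaussians and T is an activation function. The sketch below is illustrative, not the NPrior package's API; the function name, the alpha_0 default, and the ReLU activation are assumptions chosen here because a ReLU scale places positive probability mass at exactly zero, mimicking a spike-and-slab prior.

```python
import numpy as np

def neuronized_prior_sample(p, activation, alpha0=0.0, rng=None):
    """Draw p coefficients theta_j = T(alpha_j - alpha0) * w_j,
    with alpha_j, w_j iid N(0, 1).

    Illustrative sketch of the neuronized-prior construction; the
    function name and signature are assumptions, not NPrior's API.
    """
    rng = np.random.default_rng(rng)
    alpha = rng.standard_normal(p)  # scale-generating Gaussian
    w = rng.standard_normal(p)      # Gaussian weight variable
    return activation(alpha - alpha0) * w

# ReLU activation: T(x) = max(x, 0); each theta_j is exactly zero
# with probability Phi(alpha0), giving spike-and-slab-like sparsity.
relu = lambda x: np.maximum(x, 0.0)
theta = neuronized_prior_sample(1000, relu, alpha0=1.0, rng=0)
```

With alpha0 = 1, roughly Phi(1) ≈ 84% of the draws are exactly zero; swapping in a different activation T yields other priors in the family (e.g., continuous shrinkage priors) without changing the sampler's structure.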
Pages: 1695-1710 (16 pages)
Related Papers
50 items
  • [41] Variational Bayes for high-dimensional linear regression with sparse priors (Jan, 10.1080/01621459.2020.1847121, 2021)
    Ray, Kolyan
    Szabo, Botond
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2021, 116 (535) : 1560 - 1560
  • [42] Incremental sparse Bayesian ordinal regression
    Li, Chang
    de Rijke, Maarten
    NEURAL NETWORKS, 2018, 106 : 294 - 302
  • [43] Sparse Online Variational Bayesian Regression
    Law, Kody J. H.
    Zankin, Vitaly
    SIAM-ASA JOURNAL ON UNCERTAINTY QUANTIFICATION, 2022, 10 (03): : 1070 - 1100
  • [44] SPARSE BAYESIAN REGULARIZATION USING BERNOULLI-LAPLACIAN PRIORS
    Chaari, Lotfi
    Tourneret, Jean-Yves
    Batatia, Hadj
    2013 PROCEEDINGS OF THE 21ST EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2013,
  • [45] EP-GIG Priors and Applications in Bayesian Sparse Learning
    Zhang, Zhihua
    Wang, Shusen
    Liu, Dehua
    Jordan, Michael I.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2012, 13 : 2031 - 2061
  • [46] On the correspondence from Bayesian log-linear modelling to logistic regression modelling with g-priors
    Papathomas, Michail
    TEST, 2018, 27 (01) : 197 - 220
  • [48] Sparse PCA from Sparse Linear Regression
    Bresler, Guy
    Park, Sung Min
    Persu, Madalina
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [49] Sparse Bayesian learning with automatic-weighting Laplace priors for sparse signal recovery
    Bai, Zonglong
    Sun, Jinwei
    COMPUTATIONAL STATISTICS, 2023, 38 (04) : 2053 - 2074