Coupling-based convergence assessment of some Gibbs samplers for high-dimensional Bayesian regression with shrinkage priors

Cited by: 5
Authors
Biswas, Niloy [1 ]
Bhattacharya, Anirban [2 ]
Jacob, Pierre E. [3 ]
Johndrow, James E. [4 ]
Affiliations
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] Texas A&M Univ, College Stn, TX USA
[3] ESSEC Business Sch, Cergy Pontoise, France
[4] Univ Penn, Wharton Sch, Philadelphia, PA 19104 USA
Funding
US National Science Foundation
Keywords
Bayesian inference; couplings; Gibbs sampling; Horseshoe prior; parallel computation; CHAIN MONTE-CARLO; GEOMETRIC ERGODICITY; HORSESHOE ESTIMATOR; PRIOR DISTRIBUTIONS; VARIABLE SELECTION; REGULARIZATION; COMPLEXITY; RATES;
DOI
10.1111/rssb.12495
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline classification codes
020208; 070103; 0714
Abstract
We consider Markov chain Monte Carlo (MCMC) algorithms for Bayesian high-dimensional regression with continuous shrinkage priors. A common challenge with these algorithms is the choice of the number of iterations to perform. This is critical when each iteration is expensive, as is the case when dealing with modern data sets, such as genome-wide association studies with thousands of rows and up to hundreds of thousands of columns. We develop coupling techniques tailored to the setting of high-dimensional regression with shrinkage priors, which enable practical, non-asymptotic diagnostics of convergence without relying on traceplots or long-run asymptotics. By establishing geometric drift and minorization conditions for the algorithm under consideration, we prove that the proposed couplings have finite expected meeting time. Focusing on a class of shrinkage priors which includes the 'Horseshoe', we empirically demonstrate the scalability of the proposed couplings. A highlight of our findings is that fewer than 1000 iterations can be enough for a Gibbs sampler to reach stationarity in a regression on 100,000 covariates. The numerical results also illustrate the impact of the prior on the computational efficiency of the coupling, and suggest the use of priors where the local precisions are Half-t distributed with degrees of freedom larger than one.
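The coupling idea summarized in the abstract can be illustrated outside the regression setting with a toy sketch: two copies of a chain are updated through a maximal coupling of their transition kernels, and the iteration at which they meet gives a non-asymptotic convergence diagnostic. The example below uses a simple Gaussian AR(1) update as a stand-in for one Gibbs sweep; the function names (`maximal_coupling`, `meeting_time`) and the AR(1) chain are illustrative assumptions, not the paper's actual Gibbs sampler for shrinkage priors.

```python
import numpy as np

def normal_pdf(x, mu, s):
    """Density of N(mu, s^2) at x."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def maximal_coupling(mu1, mu2, s, rng):
    """Sample (X, Y) with X ~ N(mu1, s^2) and Y ~ N(mu2, s^2),
    maximizing P(X == Y) (standard rejection construction)."""
    x = rng.normal(mu1, s)
    if rng.uniform() * normal_pdf(x, mu1, s) <= normal_pdf(x, mu2, s):
        return x, x  # the two chains take the same value: they have met
    while True:
        y = rng.normal(mu2, s)
        if rng.uniform() * normal_pdf(y, mu2, s) > normal_pdf(y, mu1, s):
            return x, y

def meeting_time(rho=0.5, s=1.0, max_iter=10_000, seed=0):
    """Run two coupled copies of the AR(1) chain X' = rho*X + N(0, s^2),
    started from distinct states, and return the first meeting iteration."""
    rng = np.random.default_rng(seed)
    x, y = 10.0, -10.0  # deliberately far-apart initializations
    for t in range(1, max_iter + 1):
        x, y = maximal_coupling(rho * x, rho * y, s, rng)
        if x == y:
            return t
    return None  # did not meet within the budget

# Meeting times across independent replicates; their distribution
# is what coupling-based diagnostics use to bound distance to stationarity.
times = [meeting_time(seed=k) for k in range(20)]
```

For this contracting toy chain the meeting times are finite with probability one; the paper's contribution is to establish the analogous finite-expected-meeting-time property, via drift and minorization conditions, for Gibbs samplers targeting Horseshoe-type posteriors.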
Pages: 973-996
Page count: 24