Honest exploration of intractable probability distributions via Markov chain Monte Carlo

Cited: 164
Authors
Jones, GL
Hobert, JP
Affiliations
[1] Univ Minnesota, Sch Stat, Minneapolis, MN 55455 USA
[2] Univ Florida, Dept Stat, Gainesville, FL 32611 USA
Keywords
central limit theorem; convergence rate; coupling inequality; drift condition; general state space; geometric ergodicity; Gibbs sampler; hierarchical random effects model; Metropolis algorithm; minorization condition; regeneration; splitting; uniform ergodicity
DOI
10.1214/ss/1015346317
CLC numbers
O21 [Probability theory and mathematical statistics]; C8 [Statistics]
Discipline codes
020208; 070103; 0714
Abstract
Two important questions that must be answered whenever a Markov chain Monte Carlo (MCMC) algorithm is used are (Q1) What is an appropriate burn-in? and (Q2) How long should the sampling continue after burn-in? Developing rigorous answers to these questions presently requires a detailed study of the convergence properties of the underlying Markov chain. Consequently, in most practical applications of MCMC, exact answers to (Q1) and (Q2) are not sought. The goal of this paper is to demystify the analysis that leads to honest answers to (Q1) and (Q2). The authors hope that this article will serve as a bridge between those developing Markov chain theory and practitioners using MCMC to solve practical problems. The ability to address (Q1) and (Q2) formally comes from establishing a drift condition and an associated minorization condition, which together imply that the underlying Markov chain is geometrically ergodic. In this article, we explain exactly what drift and minorization are as well as how and why these conditions can be used to form rigorous answers to (Q1) and (Q2). The basic ideas are as follows. The results of Rosenthal (1995) and Roberts and Tweedie (1999) allow one to use drift and minorization conditions to construct a formula giving an analytic upper bound on the distance to stationarity. A rigorous answer to (Q1) can be calculated using this formula. The desired characteristics of the target distribution are typically estimated using ergodic averages. Geometric ergodicity of the underlying Markov chain implies that central limit theorems are available for ergodic averages (Chan and Geyer, 1994). The regenerative simulation technique (Mykland, Tierney and Yu, 1995; Robert, 1995) can be used to obtain a consistent estimate of the variance of the asymptotic normal distribution. Hence, an asymptotic standard error can be calculated, which provides an answer to (Q2) in the sense that an appropriate time to stop sampling can be determined.
The methods are illustrated using a Gibbs sampler for a Bayesian version of the one-way random effects model and a data set concerning styrene exposure.
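To make the (Q1) machinery of the abstract concrete: once drift constants (lambda, b), a small-set level d, and a minorization constant epsilon have been established, Rosenthal's (1995) result yields a computable upper bound on the total-variation distance to stationarity, and the burn-in is the first iteration at which the bound falls below a tolerance. The sketch below uses that bound in the form commonly presented from Rosenthal (1995); all numerical constants are hypothetical illustrations, not values from the paper.

```python
def rosenthal_bound(n, eps, lam, b, d, v0, r=0.2):
    """Upper bound on ||P^n(x,.) - pi|| after n steps, in the form derived
    from Rosenthal (1995):
        (1 - eps)^(r n) + (U^r * alpha^-(1 - r))^n * (1 + b/(1 - lam) + V(x)),
    where alpha = (1 + d)/(1 + 2b + lam*d) and U = 1 + 2(lam*d + b).
    eps: minorization constant on the small set {V <= d} (need d > 2b/(1 - lam));
    lam, b: drift constants; v0 = V(x) at the start value; r in (0, 1) is free."""
    alpha = (1.0 + d) / (1.0 + 2.0 * b + lam * d)
    U = 1.0 + 2.0 * (lam * d + b)
    geo = (U ** r) * (alpha ** -(1.0 - r))  # must be < 1 for a useful bound
    return (1.0 - eps) ** (r * n) + geo ** n * (1.0 + b / (1.0 - lam) + v0)

def burn_in(eps, lam, b, d, v0, tol=0.01, r=0.2, n_max=10**6):
    """Smallest n whose bound falls below tol: an 'honest' answer to (Q1)."""
    for n in range(1, n_max):
        if rosenthal_bound(n, eps, lam, b, d, v0, r) < tol:
            return n
    return None  # bound never certifies convergence at this tolerance

# Hypothetical constants, purely for illustration (d = 5 > 2b/(1 - lam) = 2.22):
n_star = burn_in(eps=0.2, lam=0.1, b=1.0, d=5.0, v0=2.0, tol=0.01)
```

Note that the bound can be far from tight, and a poor choice of r makes the geometric factor exceed 1, in which case no burn-in is certified; in practice one optimizes over r.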
Pages: 312 - 334
Number of pages: 23
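The abstract's answer to (Q2) rests on regenerative simulation: at regeneration times the chain starts afresh from the minorization measure, so the tours between regenerations are i.i.d. and the asymptotic variance in the central limit theorem can be estimated consistently (Mykland, Tierney and Yu, 1995). The sketch below illustrates the idea on a toy chain whose regeneration structure is built in by construction (with probability eps each step draws directly from the minorization measure nu = N(0, 1)); the chain, eps and rho are hypothetical, not from the paper's styrene example.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_chain(n, eps=0.3, rho=0.5):
    """Toy chain with an explicit minorization: with probability eps draw
    X ~ N(0, 1) (the measure nu, a regeneration), otherwise X ~ N(rho*x, 1).
    Returns the sample path and the indicator of regeneration times."""
    x = rng.standard_normal()
    xs, regen = np.empty(n), np.zeros(n, dtype=bool)
    for i in range(n):
        if rng.random() < eps:
            x = rng.standard_normal()  # regeneration: draw from nu
            regen[i] = True
        else:
            x = rho * x + rng.standard_normal()
        xs[i] = x
    return xs, regen

def regen_estimate(xs, regen, g=lambda x: x):
    """Ratio estimator of E_pi[g(X)] with a regeneration-based standard error.
    Tours between regenerations are i.i.d., so with tour sums S_i and tour
    lengths N_i:  ghat = sum(S_i)/sum(N_i),
    se = sqrt(sum((S_i - ghat*N_i)^2)) / sum(N_i)."""
    starts = np.flatnonzero(regen)
    S, N = [], []
    for a, b in zip(starts[:-1], starts[1:]):  # complete tours only
        S.append(g(xs[a:b]).sum())
        N.append(b - a)
    S, N = np.array(S), np.array(N)
    ghat = S.sum() / N.sum()
    se = np.sqrt(np.sum((S - ghat * N) ** 2)) / N.sum()
    return ghat, se, len(S)

xs, regen = split_chain(100_000)
ghat, se, tours = regen_estimate(xs, regen)
# Stop sampling once the half-width of ghat +/- 2*se is acceptably small: (Q2).
```

For this chain the stationary mean of X is 0, so ghat should land within a few standard errors of 0; for a real sampler the regeneration times must be identified via the minorization condition and the split-chain construction rather than being known by design.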