Importance sampling type estimators based on approximate marginal Markov chain Monte Carlo

Cited by: 13
Authors
Vihola, Matti [1 ]
Helske, Jouni [1 ,2 ]
Franks, Jordan [1 ,3 ]
Affiliations
[1] Univ Jyvaskyla, Dept Math & Stat, POB 35, FI-40014 Jyvaskyla, Finland
[2] Linkoping Univ, Dept Sci & Technol, Linkoping, Sweden
[3] Newcastle Univ, Sch Math Stat & Phys, Newcastle Upon Tyne, Tyne & Wear, England
Funding
Academy of Finland
Keywords
Delayed acceptance; importance sampling; Markov chain Monte Carlo; sequential Monte Carlo; pseudo-marginal method; unbiased estimator; CENTRAL LIMIT-THEOREMS; GEOMETRIC ERGODICITY; ADDITIVE-FUNCTIONALS; BAYESIAN COMPUTATION; UNIFORM ERGODICITY; STATE; GIBBS; SIMULATION; CONVERGENCE; HASTINGS;
DOI
10.1111/sjos.12492
Chinese Library Classification (CLC)
O21 [Probability theory and mathematical statistics]; C8 [Statistics]
Subject classification codes
020208; 070103; 0714
Abstract
We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including straightforward parallelization and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the suggested estimators, and provide central limit theorems with expressions for asymptotic variances. We demonstrate how our method can make use of SMC in the context of state space models, using Laplace approximations and time-discretized diffusions. Our experimental results are promising and show that the IS-type approach can provide substantial gains relative to an analogous DA scheme, and is often competitive even without parallelization.
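As a rough illustration of the scheme summarized above, the following Python sketch runs a Metropolis-Hastings chain on an approximate marginal posterior of a hyperparameter and then corrects it with importance weights. It is a minimal toy, not the paper's method: it assumes a simple conjugate Gaussian latent-variable model, a deliberately crude Gaussian stand-in for the approximate marginal likelihood, and plain importance sampling over the latent variable in place of SMC, Laplace approximations, or discretized diffusions; all names and model choices are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy model (illustration only, not one of the paper's examples):
#   theta ~ N(0, 10),  x | theta ~ N(theta, 1),  y | x ~ N(x, 1),
# so the exact marginal likelihood is p(y | theta) = N(y; theta, 2).
y_obs = 1.3

def log_prior(theta):
    return -0.5 * theta ** 2 / 10.0

def log_lik_approx(theta):
    # Deliberately crude unnormalized "approximate" marginal likelihood
    # N(y; theta, 1.5), standing in for, e.g., a Laplace approximation.
    return -0.5 * (y_obs - theta) ** 2 / 1.5

def lik_hat(theta, n_particles=32):
    # Unbiased importance-sampling estimate of the exact marginal likelihood
    # p(y | theta), integrating out the latent x with proposal x ~ N(theta, 1).
    x = rng.normal(theta, 1.0, size=n_particles)
    return np.mean(np.exp(-0.5 * (y_obs - x) ** 2) / np.sqrt(2.0 * np.pi))

# Stage 1: random-walk Metropolis targeting the approximate posterior of theta.
n_iter, step = 5000, 1.0
theta = 0.0
log_post = log_prior(theta) + log_lik_approx(theta)
chain, weights = [], []
for _ in range(n_iter):
    prop = theta + step * rng.normal()
    log_post_prop = log_prior(prop) + log_lik_approx(prop)
    if np.log(rng.uniform()) < log_post_prop - log_post:
        theta, log_post = prop, log_post_prop
    chain.append(theta)
    # Stage 2: importance weight, proportional (up to a constant that cancels
    # under self-normalization) to an unbiased estimate of the ratio of exact
    # to approximate marginal likelihood at the stored state.
    weights.append(lik_hat(theta) / np.exp(log_lik_approx(theta)))

chain, weights = np.array(chain), np.array(weights)
# Self-normalized IS-type estimate of the exact posterior mean E[theta | y].
print("IS-MCMC estimate:", np.sum(weights * chain) / np.sum(weights))
print("exact value:     ", 10.0 * y_obs / 12.0)  # conjugate closed form

Because the weights are attached to stored chain states, the second stage parallelizes trivially, which is the practical advantage over delayed acceptance noted in the abstract.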
Pages: 1339-1376
Number of pages: 38
Related papers
50 records in total
[31]   Markov Chain Monte Carlo posterior sampling with the Hamiltonian method [J].
Hanson, KM .
MEDICAL IMAGING 2001: IMAGE PROCESSING, PTS 1-3, 2001, 4322: 456-467
[32]   Markov chain Monte Carlo on optimal adaptive sampling selections [J].
Chao, CT .
ENVIRONMENTAL AND ECOLOGICAL STATISTICS, 2003, 10 (01) :129-151
[33]   Markov chain Monte Carlo sampling using a reservoir method [J].
Wang, Zhonglei .
COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2019, 139 :64-74
[34]   ON THE OPTIMAL TRANSITION MATRIX FOR MARKOV CHAIN MONTE CARLO SAMPLING [J].
Chen, Ting-Li ;
Chen, Wei-Kuo ;
Hwang, Chii-Ruey ;
Pai, Hui-Ming .
SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2012, 50 (05) :2743-2762
[35]   A simple introduction to Markov Chain Monte-Carlo sampling [J].
van Ravenzwaaij, Don ;
Cassey, Pete ;
Brown, Scott D. .
PSYCHONOMIC BULLETIN & REVIEW, 2018, 25 (01) :143-154
[36]   Accelerating Markov Chain Monte Carlo sampling with diffusion models [J].
Hunt-Smith, N. T. ;
Melnitchouk, W. ;
Ringer, F. ;
Sato, N. ;
Thomas, A. W. ;
White, M. J. .
COMPUTER PHYSICS COMMUNICATIONS, 2024, 296
[37]   Adaptive Markov chain Monte Carlo sampling and estimation in Mata [J].
Baker, Matthew J. .
STATA JOURNAL, 2014, 14 (03) :623-661
[38]   Markov Chain Monte Carlo on optimal adaptive sampling selections [J].
Chang-Tai Chao .
Environmental and Ecological Statistics, 2003, 10 :129-151
[39]   Marginal maximum a posteriori estimation using Markov chain Monte Carlo [J].
Arnaud Doucet ;
Simon J. Godsill ;
Christian P. Robert .
Statistics and Computing, 2002, 12 :77-84
[40]   Marginal maximum a posteriori estimation using Markov chain Monte Carlo [J].
Doucet, A ;
Godsill, SJ ;
Robert, CP .
STATISTICS AND COMPUTING, 2002, 12 (01) :77-84