Automatic Zig-Zag sampling in practice

Authors
Alice Corbella
Simon E. F. Spencer
Gareth O. Roberts
Affiliations
[1] University of Warwick,Department of Statistics
[2] University of Cambridge,MRC Biostatistics Unit
[3] The Alan Turing Institute
Source
Statistics and Computing | 2022 / Vol. 32
Keywords
Automatic inference; Piecewise deterministic Markov processes; Zig-Zag sampler; Gradient-based MCMC; Super-efficiency;
Abstract
Novel Monte Carlo methods that generate samples from a target distribution, such as a posterior from a Bayesian analysis, have expanded rapidly in the past decade. Algorithms based on Piecewise Deterministic Markov Processes (PDMPs), non-reversible continuous-time processes, are developing into a research branch of their own, thanks to their important properties (e.g., super-efficiency). Nevertheless, practice has not caught up with theory in this field, and PDMPs are not yet widely used to solve applied problems. This may be due, first, to the several implementation challenges that PDMP-based samplers present and, second, to the lack of papers showcasing these methods and their implementation in applied settings. Here, we address both issues using one of the most promising PDMPs, the Zig-Zag sampler, as an archetypal example. After explaining the key elements of the Zig-Zag sampler, we expose and address its implementation challenges. Specifically, we provide an algorithm that draws samples from a target distribution of interest. Notably, the only requirement of the algorithm is a closed-form, differentiable function evaluating the log-target density; unlike previous implementations, no further information on the target is needed. The performance of the algorithm is evaluated against canonical Hamiltonian Monte Carlo and shown to be competitive in both simulated and real-data settings. Lastly, we demonstrate that the super-efficiency property, i.e. the ability to draw one independent sample at a lesser cost than evaluating the likelihood of all the data, can be achieved in practice.
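To illustrate the kind of sampler the abstract describes, the following is a minimal sketch of a one-dimensional Zig-Zag process targeting a standard normal density. It is not the paper's automatic implementation (which handles arbitrary differentiable log-targets via adaptive thinning); here the standard-normal case is chosen because its switching rate λ(τ) = max(0, θ(x + θτ)) has an integrated rate that inverts in closed form, so event times can be drawn exactly. The function names `zigzag_gaussian` and `sample_path` are illustrative, not from the paper.

```python
import numpy as np

def zigzag_gaussian(t_max, x0=0.0, theta0=1.0, seed=0):
    """1D Zig-Zag sampler for a standard normal target.
    State is (x, theta) with velocity theta in {-1, +1}; the position
    moves linearly between events, and the velocity flips at event
    times drawn by inverting the integrated switching rate."""
    rng = np.random.default_rng(seed)
    x, theta, t = x0, theta0, 0.0
    times, points = [t], [x]
    while t < t_max:
        a = theta * x                 # rate at lag tau is max(0, a + tau)
        e = rng.exponential()         # Exp(1) threshold for the first event
        # Closed-form inversion of the integrated rate (both a >= 0, a < 0):
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        x += theta * tau              # deterministic linear motion to the event
        t += tau
        theta = -theta                # flip the velocity at the event
        times.append(t)
        points.append(x)
    return np.array(times), np.array(points)

def sample_path(times, points, dt):
    """Evaluate the piecewise-linear Zig-Zag trajectory on a regular
    time grid; linear interpolation between events is exact here."""
    grid = np.arange(0.0, times[-1], dt)
    return np.interp(grid, times, points)

times, points = zigzag_gaussian(t_max=20000.0, seed=1)
xs = sample_path(times, points, dt=0.05)
```

Because the full continuous-time trajectory is the sample (not just the event skeleton), expectations are estimated by averaging along the path; with a long horizon, `xs.mean()` and `xs.var()` approach 0 and 1.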