ASYMPTOTIC BIAS OF INEXACT MARKOV CHAIN MONTE CARLO METHODS IN HIGH DIMENSION

Cited by: 2
Authors
Durmus, Alain [1 ]
Eberle, Andreas [2 ]
Affiliations
[1] Ecole Polytech, Inst Polytech Paris, Ctr Math Appl CMAP, CNRS, Palaiseau, France
[2] Univ Bonn, Inst Angew Math, Bonn, Germany
Keywords
Coupling; convergence to equilibrium; Markov chain Monte Carlo; Hamiltonian Monte Carlo; hybrid Monte Carlo; CONTRACTION RATES; LANGEVIN; CONVERGENCE; APPROXIMATION; GUARANTEES; ALGORITHMS; COUPLINGS; EQUATIONS; ERROR;
DOI
10.1214/23-AAP2034
CLC Classification Number
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
Inexact Markov chain Monte Carlo methods rely on Markov chains that do not exactly preserve the target distribution. Examples include the unadjusted Langevin algorithm (ULA) and unadjusted Hamiltonian Monte Carlo (uHMC). This paper establishes bounds on Wasserstein distances between the invariant probability measures of inexact MCMC methods and their target distributions, with a focus on understanding the precise dependence of this asymptotic bias on both the dimension and the discretization step size. Assuming Wasserstein bounds on the convergence to equilibrium of either the exact or the approximate dynamics, we show that for both ULA and uHMC, the asymptotic bias depends on key quantities related to the target distribution or the stationary probability measure of the scheme. As a corollary, we conclude that for models with a limited amount of interactions, such as mean-field models, finite-range graphical models, and perturbations thereof, the asymptotic bias has a similar dependence on the step size and the dimension as for product measures.
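For orientation, the sketch below shows one transition of the unadjusted Langevin algorithm, the simpler of the two inexact methods named in the abstract. It is not taken from the paper: the Gaussian target, the step size, and the function names are illustrative assumptions. Because each step is an Euler-Maruyama discretization of the overdamped Langevin diffusion without a Metropolis correction, the chain's invariant measure differs from the target, which is exactly the asymptotic bias the paper quantifies (uHMC is inexact for the analogous reason, via leapfrog discretization of Hamiltonian dynamics).

```python
import numpy as np

def ula_step(x, grad_log_density, step_size, rng):
    """One step of the unadjusted Langevin algorithm (ULA):
    x_{k+1} = x_k + h * grad log pi(x_k) + sqrt(2h) * xi,  xi ~ N(0, I).
    The discretization error is what makes the chain's invariant
    measure differ from the target pi."""
    noise = rng.standard_normal(x.shape)
    return x + step_size * grad_log_density(x) + np.sqrt(2.0 * step_size) * noise

# Illustrative usage (assumed setup): standard Gaussian target in
# dimension 100, for which grad log pi(x) = -x.
rng = np.random.default_rng(0)
x = np.zeros(100)
for _ in range(5000):
    x = ula_step(x, lambda y: -y, step_size=0.01, rng=rng)
```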
Pages: 3435-3468
Number of pages: 34