An interruptible algorithm for perfect sampling via Markov chains

Cited by: 1
Authors
Fill, JA [1 ]
Affiliations
[1] Johns Hopkins Univ, Dept Math Sci, Baltimore, MD 21218 USA
Keywords
Markov chain Monte Carlo; perfect simulation; rejection sampling; monotone chain; attractive spin system; Ising model; Gibbs sampler; separation; strong stationary time; duality; partially ordered set;
DOI
Not available
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline classification codes
020208; 070103; 0714
Abstract
For a large class of examples arising in statistical physics known as attractive spin systems (e.g., the Ising model), one seeks to sample from a probability distribution pi on an enormously large state space, but elementary sampling is ruled out by the infeasibility of calculating an appropriate normalizing constant. The same difficulty arises in computer science problems where one seeks to sample randomly from a large finite distributive lattice whose precise size cannot be ascertained in any reasonable amount of time. The Markov chain Monte Carlo (MCMC) approximate sampling approach to such a problem is to construct and run "for a long time" a Markov chain with long-run distribution pi. But determining how long is long enough to get a good approximation can be both analytically and empirically difficult. Recently, Propp and Wilson have devised an ingenious and efficient algorithm to use the same Markov chains to produce perfect (i.e., exact) samples from pi. However, the running time of their algorithm is an unbounded random variable whose order of magnitude is typically unknown a priori and which is not independent of the state sampled, so a naive user with limited patience who aborts a long run of the algorithm will introduce bias. We present a new algorithm which (1) again uses the same Markov chains to produce perfect samples from pi, but is based on a different idea (namely, acceptance/rejection sampling); and (2) eliminates user-impatience bias. Like the Propp-Wilson algorithm, the new algorithm applies to a general class of suitably monotone chains, and also (with modification) to "anti-monotone" chains. When the chain is reversible, naive implementation of the algorithm uses fewer transitions but more space than Propp-Wilson. When fine-tuned and applied with the aid of a typical pseudorandom number generator to an attractive spin system on n sites using a random site updating Gibbs sampler whose mixing time tau is polynomial in n, the algorithm runs in time of the same order (bound) as Propp-Wilson [expectation O(tau log n)] and uses only logarithmically more space [expectation O(n log n), vs. O(n) for Propp-Wilson].
Pages: 131-162
Number of pages: 32