An interruptible algorithm for perfect sampling via Markov chains

Cited by: 1
Authors
Fill, JA [1]
Affiliations
[1] Johns Hopkins Univ, Dept Math Sci, Baltimore, MD 21218 USA
Keywords
Markov chain Monte Carlo; perfect simulation; rejection sampling; monotone chain; attractive spin system; Ising model; Gibbs sampler; separation; strong stationary time; duality; partially ordered set;
DOI
Not available
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline codes
020208; 070103; 0714
Abstract
For a large class of examples arising in statistical physics known as attractive spin systems (e.g., the Ising model), one seeks to sample from a probability distribution pi on an enormously large state space, but elementary sampling is ruled out by the infeasibility of calculating an appropriate normalizing constant. The same difficulty arises in computer science problems where one seeks to sample randomly from a large finite distributive lattice whose precise size cannot be ascertained in any reasonable amount of time. The Markov chain Monte Carlo (MCMC) approximate sampling approach to such a problem is to construct and run "for a long time" a Markov chain with long-run distribution pi. But determining how long is long enough to get a good approximation can be both analytically and empirically difficult. Recently, Propp and Wilson have devised an ingenious and efficient algorithm to use the same Markov chains to produce perfect (i.e., exact) samples from pi. However, the running time of their algorithm is an unbounded random variable whose order of magnitude is typically unknown a priori and which is not independent of the state sampled, so a naive user with limited patience who aborts a long run of the algorithm will introduce bias. We present a new algorithm which (1) again uses the same Markov chains to produce perfect samples from pi, but is based on a different idea (namely, acceptance/rejection sampling); and (2) eliminates user-impatience bias. Like the Propp-Wilson algorithm, the new algorithm applies to a general class of suitably monotone chains, and also (with modification) to "anti-monotone" chains. When the chain is reversible, naive implementation of the algorithm uses fewer transitions but more space than Propp-Wilson. When fine-tuned and applied with the aid of a typical pseudorandom number generator to an attractive spin system on n sites using a random-site-updating Gibbs sampler whose mixing time tau is polynomial in n, the algorithm runs in time of the same order (bound) as Propp-Wilson [expectation O(tau log n)] and uses only logarithmically more space [expectation O(n log n), vs. O(n) for Propp-Wilson].
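To make the acceptance/rejection idea concrete, here is a minimal sketch of a rejection-based perfect sampler for a toy attractive spin system: the ferromagnetic Ising model on a cycle of n sites, driven by a random-site heat-bath Gibbs sampler (a monotone, reversible chain). The specific steps, parameter values, and function names below (p_plus, apply_update, impute_update, attempt, beta, T) are illustrative assumptions written for this record, not the paper's own pseudocode; the abstract only specifies that the algorithm uses acceptance/rejection with suitably monotone chains.

```python
import numpy as np

# Illustrative sketch (assumed details, not the paper's pseudocode): a
# rejection-based perfect sampler in the spirit described in the abstract,
# for the ferromagnetic Ising model on a cycle of n sites, updated by a
# random-site heat-bath Gibbs sampler.  This chain is monotone and
# reversible, so the acceptance/rejection coupling idea applies.

rng = np.random.default_rng(0)
n, beta = 8, 0.4                       # number of sites, inverse temperature

def p_plus(sigma, i):
    """Heat-bath probability that site i is set to +1 given its neighbours."""
    s = sigma[(i - 1) % n] + sigma[(i + 1) % n]
    return 1.0 / (1.0 + np.exp(-2.0 * beta * s))

def apply_update(sigma, i, u):
    """Deterministic update phi(sigma, (i, u)); monotone in sigma for fixed (i, u)."""
    out = sigma.copy()
    out[i] = 1 if u <= p_plus(sigma, i) else -1
    return out

def forward_step(sigma):
    """One ordinary step of the Gibbs sampler: random site, fresh uniform."""
    return apply_update(sigma, rng.integers(n), rng.random())

def impute_update(sigma, sigma_next):
    """Draw (i, u) from its conditional law given phi(sigma, (i, u)) == sigma_next."""
    changed = np.flatnonzero(sigma != sigma_next)
    if len(changed) == 1:               # one site flipped: that site was updated
        i = changed[0]
    else:                               # nothing changed: weight each site by the
        w = np.array([p_plus(sigma, j) if sigma[j] == 1 else 1.0 - p_plus(sigma, j)
                      for j in range(n)])        # chance the update keeps sigma[j]
        i = rng.choice(n, p=w / w.sum())
    p = p_plus(sigma, i)
    return i, (rng.uniform(0, p) if sigma_next[i] == 1 else rng.uniform(p, 1))

def attempt(T):
    """One independent acceptance/rejection attempt; returns a sample or None."""
    top, bottom = np.ones(n, dtype=int), -np.ones(n, dtype=int)
    # Because the chain is reversible, running it forward from the top state and
    # relabelling times gives a time-reversed path X_T = top, X_{T-1}, ..., X_0.
    path = [top]
    for _ in range(T):
        path.append(forward_step(path[-1]))
    candidate = path[-1]                # X_0: the proposed sample
    # Impute update randomness consistent with the forward-time transitions
    # X_{t-1} -> X_t, i.e. path[s] -> path[s-1] for s = T, ..., 1.
    updates = [impute_update(path[s], path[s - 1]) for s in range(T, 0, -1)]
    # Rerun from the bottom state with the imputed updates; by monotonicity,
    # reaching the top certifies that every starting state would have done so.
    y = bottom
    for i, u in updates:
        y = apply_update(y, i, u)
    return candidate if np.array_equal(y, top) else None

def perfect_sample(T=64):
    """Repeat independent attempts (doubling T) until one is accepted.

    Each attempt uses fresh randomness, so a user who aborts and keeps only
    completed accepted draws introduces no bias (interruptibility)."""
    while True:
        out = attempt(T)
        if out is not None:
            return out
        T *= 2

print(perfect_sample())
```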
Pages: 131 - 162
Page count: 32