An interruptible algorithm for perfect sampling via Markov chains

Cited: 1
Authors
Fill, JA [1 ]
Affiliation
[1] Johns Hopkins Univ, Dept Math Sci, Baltimore, MD 21218 USA
Source
ANNALS OF APPLIED PROBABILITY | 1998 / Volume 8 / Issue 01
Keywords
Markov chain Monte Carlo; perfect simulation; rejection sampling; monotone chain; attractive spin system; Ising model; Gibbs sampler; separation; strong stationary time; duality; partially ordered set;
DOI
Not available
CLC Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject Classification
020208 ; 070103 ; 0714 ;
Abstract
For a large class of examples arising in statistical physics known as attractive spin systems (e.g., the Ising model), one seeks to sample from a probability distribution pi on an enormously large state space, but elementary sampling is ruled out by the infeasibility of calculating an appropriate normalizing constant. The same difficulty arises in computer science problems where one seeks to sample randomly from a large finite distributive lattice whose precise size cannot be ascertained in any reasonable amount of time. The Markov chain Monte Carlo (MCMC) approximate sampling approach to such a problem is to construct and run "for a long time" a Markov chain with long-run distribution pi. But determining how long is long enough to get a good approximation can be both analytically and empirically difficult. Recently, Propp and Wilson have devised an ingenious and efficient algorithm to use the same Markov chains to produce perfect (i.e., exact) samples from pi. However, the running time of their algorithm is an unbounded random variable whose order of magnitude is typically unknown a priori and which is not independent of the state sampled, so a naive user with limited patience who aborts a long run of the algorithm will introduce bias. We present a new algorithm which (1) again uses the same Markov chains to produce perfect samples from pi, but is based on a different idea (namely, acceptance/rejection sampling); and (2) eliminates user-impatience bias. Like the Propp-Wilson algorithm, the new algorithm applies to a general class of suitably monotone chains, and also (with modification) to "anti-monotone" chains. When the chain is reversible, naive implementation of the algorithm uses fewer transitions but more space than Propp-Wilson.
When fine-tuned and applied with the aid of a typical pseudorandom number generator to an attractive spin system on n sites using a random site updating Gibbs sampler whose mixing time tau is polynomial in n, the algorithm runs in time of the same order (bound) as Propp-Wilson [expectation O(tau log n)] and uses only logarithmically more space [expectation O(n log n), vs. O(n) for Propp-Wilson].
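To make the monotonicity idea in the abstract concrete, here is a minimal illustrative sketch of the Propp-Wilson coupling-from-the-past (CFTP) method that the paper compares against, not Fill's interruptible rejection algorithm itself. It runs a monotone lazy random walk on {0, ..., n} (whose stationary distribution is uniform) from the minimal and maximal states until they coalesce; the names `phi` and `monotone_cftp` are our own for this example.

```python
import random

def phi(x, u, n):
    # Monotone update rule: both chains consume the same u_t,
    # so the order lo <= hi is preserved at every step.
    return min(x + 1, n) if u >= 0.5 else max(x - 1, 0)

def monotone_cftp(n, rng=random.Random(0)):
    """Coupling from the past for a lazy walk on {0, ..., n}.

    The walk's stationary distribution is uniform; CFTP returns an
    exact (perfect) sample from it, with a random running time.
    """
    us = {}  # randomness u_t is fixed per time step and reused on restarts
    T = 1
    while True:
        lo, hi = 0, n  # start the minimal and maximal chains at time -T
        for t in range(-T, 0):
            if t not in us:
                us[t] = rng.random()
            lo = phi(lo, us[t], n)
            hi = phi(hi, us[t], n)
        if lo == hi:   # coalescence at time 0 => exact sample from pi
            return lo
        T *= 2         # not coalesced: restart from further in the past
```

As the abstract notes, the catch is that the coalescence time `T` is an unbounded random variable correlated with the returned state, so aborting long runs biases the sample; Fill's acceptance/rejection construction removes exactly that user-impatience bias.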
Pages: 131-162
Page count: 32
Related Papers
50 records total
  • [11] Importance Sampling of Interval Markov Chains
    Jegourel, Cyrille
    Wang, Jingyi
    Sun, Jun
    2018 48TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS (DSN), 2018, : 303 - 313
  • [12] IMPORTANCE SAMPLING FOR INDICATOR MARKOV CHAINS
    Giesecke, Kay
    Shkolnik, Alexander D.
    PROCEEDINGS OF THE 2010 WINTER SIMULATION CONFERENCE, 2010, : 2742 - 2750
  • [13] Perfect simulation for a class of positive recurrent Markov chains
    Connor, Stephen B.
    Kendall, Wilfrid S.
    ANNALS OF APPLIED PROBABILITY, 2007, 17 (03): : 781 - 808
  • [14] ANALYSING ACCEPTANCE SAMPLING PLANS BY MARKOV CHAINS
    Mirabi, Mohammad
    Fallahnezhad, Mohammad Saber
    SOUTH AFRICAN JOURNAL OF INDUSTRIAL ENGINEERING, 2012, 23 (01): : 151 - 161
  • [15] Honest Importance Sampling With Multiple Markov Chains
    Tan, Aixin
    Doss, Hani
    Hobert, James P.
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2015, 24 (03) : 792 - 826
  • [16] Adaptive importance sampling on discrete Markov chains
    Kollman, C
    Baggerly, K
    Cox, D
    Picard, R
    ANNALS OF APPLIED PROBABILITY, 1999, 9 (02): : 391 - 412
  • [17] Parallel hierarchical sampling: A general-purpose interacting Markov chains Monte Carlo algorithm
    Rigat, F.
    Mira, A.
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2012, 56 (06) : 1450 - 1467
  • [18] Probabilistic XML via Markov Chains
    Benedikt, Michael
    Kharlamov, Evgeny
    Olteanu, Dan
    Senellart, Pierre
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2010, 3 (01): : 770 - 781
  • [19] Exponential convergence of adaptive importance sampling for Markov chains
    Baggerly, K
    Cox, D
    Picard, R
    JOURNAL OF APPLIED PROBABILITY, 2000, 37 (02) : 342 - 358
  • [20] Monotonicity Requirements for Efficient Exact Sampling with Markov Chains
    Lorek, P.
    Markowski, P.
    MARKOV PROCESSES AND RELATED FIELDS, 2017, 23 (03) : 485 - 514