Dynamic importance sampling for uniformly recurrent Markov chains

Cited by: 34
Authors
Dupuis, P [1]
Wang, H [1]
Affiliation
[1] Brown Univ, Div Appl Math, Providence, RI 02912 USA
Source
ANNALS OF APPLIED PROBABILITY, 2005, Vol. 15, No. 1A
Keywords
asymptotic optimality; importance sampling; Markov chain; Monte Carlo simulation; rare events; stochastic game; weak convergence
DOI
10.1214/105051604000001016
Chinese Library Classification (CLC)
O21 [Probability theory and mathematical statistics]; C8 [Statistics]
Discipline classification codes
020208; 070103; 0714
Abstract
Importance sampling is a variance reduction technique for efficient estimation of rare-event probabilities by Monte Carlo. In standard importance sampling schemes, the system is simulated using an a priori fixed change of measure suggested by a large deviation lower bound analysis. Recent work, however, has suggested that such schemes do not work well in many situations. In this paper we consider dynamic importance sampling in the setting of uniformly recurrent Markov chains. By "dynamic" we mean that in the course of a single simulation, the change of measure can depend on the outcome of the simulation up to that time. Based on a control-theoretic approach to large deviations, the existence of asymptotically optimal dynamic schemes is demonstrated in great generality. The implementation of the dynamic schemes is carried out with the help of a limiting Bellman equation. Numerical examples are presented to contrast the dynamic and standard schemes.
Pages: 1-38
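
To make the abstract's notion of a "dynamic" change of measure concrete, here is a minimal Python sketch on a toy gambler's-ruin chain. This is not the authors' construction: the chain, the parameter choices (N = 20, p = 0.4), and the names `h` and `dynamic_is_estimate` are assumptions for illustration only. At each step the sampling distribution depends on the current state of the simulation, here through a Doob h-transform built from the closed-form hitting probability; the paper instead constructs its dynamic schemes with the help of a limiting Bellman equation, for settings where no such closed form is available.

```python
import random

# Minimal sketch (illustration only): state-dependent ("dynamic") importance
# sampling on a nearest-neighbour chain on {0, ..., N} that steps up with
# probability p and down with probability q = 1 - p, where p < q.  We estimate
# gamma = P_{x0}(hit N before 0), a rare event when N is large.

def h(x, N, r):
    """Gambler's-ruin hitting probability P_x(hit N before 0), with r = q/p > 1."""
    return (r**x - 1.0) / (r**N - 1.0)

def dynamic_is_estimate(N=20, p=0.4, x0=1, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    q = 1.0 - p
    r = q / p
    total = 0.0
    for _ in range(n_samples):
        x, weight = x0, 1.0
        while 0 < x < N:
            # Dynamic change of measure: the probability of sampling an up-step
            # depends on the current state x through the harmonic function h
            # (a Doob h-transform of the original chain).
            p_up = p * h(x + 1, N, r) / h(x, N, r)
            if rng.random() < p_up:
                weight *= p / p_up            # likelihood ratio, up-step
                x += 1
            else:
                weight *= q / (1.0 - p_up)    # likelihood ratio, down-step
                x -= 1
        if x == N:                            # indicator of the rare event
            total += weight
    return total / n_samples

if __name__ == "__main__":
    N, p = 20, 0.4
    est = dynamic_is_estimate(N=N, p=p)
    exact = h(1, N, (1.0 - p) / p)
    print(f"dynamic IS estimate: {est:.4e}   exact: {exact:.4e}")
```

Because this toy chain is solvable in closed form, the state-dependent tilt above actually gives a zero-variance estimator (every returned weight equals the target probability); the sketch is meant only to show the mechanics of a state-dependent likelihood ratio, not the behavior of the schemes analyzed in the paper.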