A sequential direct hybrid algorithm to compute stationary distribution of continuous-time Markov chain

Cited by: 0
Authors
Brazenas, Mindaugas [1 ,2 ]
Valakevicius, Eimutis [1 ]
Affiliations
[1] Kaunas Univ Technol, Dept Math Modelling, Kaunas, Lithuania
[2] Studentu g 50-432, LT-51368 Kaunas, Lithuania
Keywords
Stationary distribution; Large continuous-time Markov chain; Hybrid algorithm; GTH method; Gaussian elimination method; SPARSE; ELIMINATION; SOLVER
DOI
10.1016/j.eswa.2022.117962
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification numbers
081104; 0812; 0835; 1405
Abstract
When modeling a wide variety of real-world problems with Markov processes, we usually obtain large sparse Markov chains. In the case of a non-Markovian process, the behavior of a stochastic phenomenon or system can be approximated by a continuous-time Markov chain (CTMC) using phase-type distributions. However, this substantially increases the number of states, which requires a large amount of computation time and memory. Even the most recent computer hardware cannot deliver a solution in near real time unless a specific algorithm is designed.

In this paper, a new hybrid algorithm for computing the stationary distribution of a large ergodic homogeneous Markov chain with a finite state space and continuous time is suggested; it addresses the aforementioned challenges. Depending on model size and sparsity, it significantly reduces computation time compared with other recent methods, such as the SparseLU solver from the Eigen library (version 3.9.9) and the MUMPS solver (MUltifrontal Massively Parallel sparse direct Solver, version 5.4.0). It is important to note that the algorithm works well independently of the structure of the generator matrix of the continuous-time Markov chain. The hybrid algorithm is based on combining two methods: a slightly modified Grassmann, Taksar and Heyman (GTH) algorithm and the Gaussian elimination method; for the study of the developed algorithm, the generator matrix is constructed randomly using a special algorithm. The new idea is to remove the majority of states in a near-optimal order by the GTH method and to complete the computations for the remaining states by the Gaussian method. A robust approach to identifying the switching position from one algorithm to the other has been proposed and tested.
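The paper's hybrid scheme is not reproduced here. The following minimal sketch only illustrates the GTH-style state elimination the abstract builds on, under simplifying assumptions: Python/NumPy, a small dense generator matrix, and a hypothetical function name gth_stationary; the example matrix Q is invented for illustration, whereas the paper targets large sparse chains and switches to Gaussian elimination partway through the reduction.

```python
import numpy as np

def gth_stationary(Q):
    """Stationary distribution of an ergodic CTMC via GTH (Grassmann,
    Taksar, Heyman) state elimination. Only the off-diagonal rates of
    the generator Q are used, so the reduction involves no subtractions."""
    A = np.array(Q, dtype=float)
    n = A.shape[0]
    # Elimination pass: fold states n-1, n-2, ..., 1 into the surviving block.
    for k in range(n - 1, 0, -1):
        s = A[k, :k].sum()                         # total rate from state k to the remaining states
        A[:k, k] /= s                              # scale incoming rates to k by its total outflow
        A[:k, :k] += np.outer(A[:k, k], A[k, :k])  # reroute i -> k -> j transitions into the surviving block
    # Back-substitution: rebuild unnormalized stationary weights.
    x = np.zeros(n)
    x[0] = 1.0
    for j in range(1, n):
        x[j] = x[:j] @ A[:j, j]
    return x / x.sum()

# Small usage example (made-up generator): pi @ Q should be numerically zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
pi = gth_stationary(Q)
print(pi, pi @ Q)
```

Per the abstract, the hybrid idea would stop this elimination at a chosen switching position and finish the remaining, smaller system by Gaussian elimination; that switching logic and the sparse data structures are not modeled in this sketch.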
Pages: 8
Related papers
50 records in total
  • [41] Simple examples of continuous-time Markov-chain models for reactions
    MacDonald, Iain L.
    REACTION KINETICS MECHANISMS AND CATALYSIS, 2023, 136 (01) : 1 - 11
  • [42] A continuous-time Markov chain model of fibrosis progression in NAFLD and NASH
    Meyer, Lyndsey F.
    Musante, Cynthia J.
    Allen, Richard
    FRONTIERS IN MEDICINE, 2023, 10
  • [43] Predicting DNA kinetics with a truncated continuous-time Markov chain method
    Zolaktaf, Sedigheh
    Dannenberg, Frits
    Schmidt, Mark
    Condon, Anne
    Winfree, Erik
    COMPUTATIONAL BIOLOGY AND CHEMISTRY, 2023, 104
  • [45] Analysis of Fast and Secure Protocol Based on Continuous-Time Markov Chain
    Zhou Conghua
    Cao Meiling
    CHINA COMMUNICATIONS, 2013, 10 (08) : 137 - 149
  • [46] Taboo rate and hitting time distribution of continuous-time reversible Markov chains
    Xiang, Xuyan
    Fu, Haiqin
    Zhou, Jieming
    Deng, Yingchun
    Yang, Xiangqun
    STATISTICS & PROBABILITY LETTERS, 2021, 169
  • [47] Robust Linear Filtering for Continuous-Time Hybrid Markov Linear Systems
    Costa, O. L. V.
    Fragoso, M. D.
    47TH IEEE CONFERENCE ON DECISION AND CONTROL, 2008 (CDC 2008), 2008, : 5098 - 5103
  • [48] A new definition of hitting time and an embedded Markov chain in continuous-time quantum walks
    Ruiz-Ortiz, Miguel A.
    Martin-Gonzalez, Ehyter M.
    Santiago-Alarcon, Diego
    Venegas-Andraca, Salvador E.
    QUANTUM INFORMATION PROCESSING, 2023, 22 (05)
  • [49] Discussion on: "On the Filtering Problem for Continuous-Time Markov Jump Linear Systems with no Observation of the Markov Chain"
    Shi, Peng
    Liu, Ming
    EUROPEAN JOURNAL OF CONTROL, 2011, 17 (04) : 355 - 356
  • [50] Estimating the stationary distribution of a Markov chain
    Athreya, KB
    Majumdar, M
    ECONOMIC THEORY, 2003, 21 (2-3) : 729 - 742