Generalization Bounds of ERM-Based Learning Processes for Continuous-Time Markov Chains

Cited by: 14
Authors
Zhang, Chao [1 ]
Tao, Dacheng [2 ,3 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Engn, Singapore 639798, Singapore
[2] Univ Technol Sydney, Ctr Quantum Computat & Intelligent Syst, Sydney, NSW 2007, Australia
[3] Univ Technol Sydney, Fac Engn & Informat Technol, Sydney, NSW 2007, Australia
Funding
Australian Research Council;
Keywords
Convergence; deviation inequality; empirical risk minimization; generalization bound; Markov chain; rate of convergence; statistical learning theory; CHANNEL ESTIMATION; CAPACITY;
DOI
10.1109/TNNLS.2012.2217987
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many existing results in statistical learning theory rest on the assumption that samples are independently and identically distributed (i.i.d.). This assumption, however, is unsuitable for practical problems in which samples are time dependent. In this paper, we study the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many practical applications, e.g., time-series prediction and the estimation of channel state information, so it is important to study its theoretical properties, including the generalization bound, asymptotic convergence, and the rate of convergence. Notably, because samples in this learning process are time dependent, the concerns of this paper cannot (at least not straightforwardly) be addressed by existing methods developed under the i.i.d. sample assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. Using the resultant deviation and symmetrization inequalities, we then obtain generalization bounds for the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on these bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.
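To make the setting concrete, the following is a minimal, purely illustrative sketch (not the paper's method) of the learning scenario the abstract describes: a hypothetical two-state continuous-time Markov chain is simulated exactly via its exponential holding times, observed on a fixed time grid to produce time-dependent (non-i.i.d.) samples, and the empirical risk of a fixed predictor under 0-1 loss is compared with its expected risk under the chain's stationary distribution. All rates, the predictor, and the loss are assumptions chosen for illustration.

```python
import random

# Hypothetical 2-state chain with generator Q = [[-a, a], [b, -b]]:
# the holding time in state i is Exp(|Q_ii|)-distributed.
a, b = 1.0, 2.0
exit_rate = {0: a, 1: b}
next_state = {0: 1, 1: 0}

def sample_path(t_obs, x0=0, seed=None):
    """Return the chain's state at each observation time in t_obs (increasing)."""
    rng = random.Random(seed)
    t, x, out = 0.0, x0, []
    residual = rng.expovariate(exit_rate[x])   # remaining holding time in state x
    for s in t_obs:
        while t + residual <= s:               # execute all jumps before time s
            t += residual
            x = next_state[x]
            residual = rng.expovariate(exit_rate[x])
        residual -= s - t                      # consume part of the holding time
        t = s
        out.append(x)
    return out

# Time-dependent sample: states observed on an equispaced grid.
t_obs = [0.1 * k for k in range(1, 201)]
xs = sample_path(t_obs, seed=0)

# Empirical risk of the constant predictor h = 0 under 0-1 loss, versus the
# expected risk under the stationary distribution pi = (b/(a+b), a/(a+b)).
emp_risk = sum(x != 0 for x in xs) / len(xs)
stat_risk = a / (a + b)
print(emp_risk, stat_risk)
```

Because consecutive observations on the grid are correlated through the chain's dynamics, the gap between `emp_risk` and `stat_risk` is exactly the kind of deviation that the paper's inequalities bound for dependent samples, where classical i.i.d. tools such as Hoeffding's inequality do not directly apply.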
Pages: 1872-1883
Number of pages: 12
Related papers
50 records in total
  • [21] Perturbation analysis for continuous-time Markov chains
    Liu, YuanYuan
    Science China Mathematics, 2015, 58 (12) : 2633 - 2642
  • [22] Interval Continuous-Time Markov Chains Simulation
    Galdino, Sergio
    2013 INTERNATIONAL CONFERENCE ON FUZZY THEORY AND ITS APPLICATIONS (IFUZZY 2013), 2013, : 273 - 278
  • [23] On Nonergodicity of Some Continuous-Time Markov Chains
    D. B. Andreev
    E. A. Krylov
    A. I. Zeifman
    Journal of Mathematical Sciences, 2004, 122 (4) : 3332 - 3335
  • [25] Lumpability for Uncertain Continuous-Time Markov Chains
    Cardelli, Luca
    Grosu, Radu
    Larsen, Kim G.
    Tribastone, Mirco
    Tschaikowski, Max
    Vandin, Andrea
    QUANTITATIVE EVALUATION OF SYSTEMS (QEST 2021), 2021, 12846 : 391 - 409
  • [26] SIMILAR STATES IN CONTINUOUS-TIME MARKOV CHAINS
    Yap, V. B.
    JOURNAL OF APPLIED PROBABILITY, 2009, 46 (02) : 497 - 506
  • [27] Matrix Analysis for Continuous-Time Markov Chains
    Le, Hung V.
    Tsatsomeros, M. J.
    SPECIAL MATRICES, 2021, 10 (01): : 219 - 233
  • [28] Algorithmic Randomness in Continuous-Time Markov Chains
    Huang, Xiang
    Lutz, Jack H.
    Migunov, Andrei N.
    2019 57TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2019, : 615 - 622
  • [29] Path integrals for continuous-time Markov chains
    Pollett, PK
    Stefanov, VT
    JOURNAL OF APPLIED PROBABILITY, 2002, 39 (04) : 901 - 904
  • [30] Maxentropic continuous-time homogeneous Markov chains
    Bolzern, Paolo
    Colaneri, Patrizio
    De Nicolao, Giuseppe
    AUTOMATICA, 2025, 175