Reinforcement Learning Based Algorithms for Average Cost Markov Decision Processes

Cited: 0
Authors
Mohammed Shahid Abdulla
Shalabh Bhatnagar
Affiliation
[1] Indian Institute of Science, Department of Computer Science and Automation
Keywords
Actor-critic algorithms; Two timescale stochastic approximation; Markov decision processes; Policy iteration; Simultaneous perturbation stochastic approximation; Normalized Hadamard matrices; Reinforcement learning; TD-learning
DOI: Not available
Abstract
This article proposes several two-timescale simulation-based actor-critic algorithms for the solution of infinite-horizon Markov Decision Processes with a finite state space under the average cost criterion. Two of the algorithms are for the compact (non-discrete) action setting, while the rest are for finite action spaces. On the slower timescale, all the algorithms perform a gradient search over the corresponding policy spaces using two different Simultaneous Perturbation Stochastic Approximation (SPSA) gradient estimates. On the faster timescale, the differential cost function corresponding to a given stationary policy is updated, and an additional averaging is performed for enhanced performance. A proof of convergence to a locally optimal policy is presented. Next, we discuss a memory-efficient implementation that uses a feature-based representation of the state space and performs TD(0) learning along the faster timescale. The TD(0) algorithm does not follow on-line sampling of states but is observed to perform well in our setting. Numerical experiments on a problem of rate-based flow control are presented using the proposed algorithms. We consider the model of a single bottleneck node in a continuous-time queueing framework. We show performance comparisons of our algorithms with the two-timescale actor-critic algorithms of Konda and Borkar (1999) and Bhatnagar and Kumar (2004). Our algorithms exhibit more than an order of magnitude better performance than those of Konda and Borkar (1999).
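As a rough illustration of the two-timescale structure described in the abstract, the sketch below pairs a TD(0) critic over linear state features (fast timescale) with an SPSA-perturbed policy update (slow timescale). It is not the authors' exact algorithm: the simulator `simulate_step`, the feature helper `features`, the softmax policy parameterization, and the crude one-measurement SPSA estimate are all illustrative assumptions.

```python
import numpy as np

# A minimal sketch of a two-timescale actor-critic update for an average-cost
# MDP: the critic runs TD(0) with a linear feature representation on the fast
# timescale, while the actor perturbs its parameter with a random +/-1 SPSA
# direction (the paper also considers deterministic perturbations built from
# normalized Hadamard matrices) on the slow timescale.
# `simulate_step` and `features` are assumed, hypothetical helpers.

def softmax_policy(theta, phi_sa):
    """Randomized policy over actions from state-action features (assumed)."""
    prefs = phi_sa @ theta
    prefs = prefs - prefs.max()
    p = np.exp(prefs)
    return p / p.sum()

def two_timescale_spsa_actor_critic(simulate_step, features, n_actions,
                                    d_theta, d_v, n_iters=10000, delta=0.1,
                                    a_fast=0.05, b_slow=0.001, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(d_theta)   # actor (policy) parameter
    v = np.zeros(d_v)           # critic weights: differential cost h(s) ~ phi(s) @ v
    rho = 0.0                   # running estimate of the average cost
    s = 0
    for _ in range(n_iters):
        # SPSA perturbation of the policy parameter.
        Delta = rng.choice([-1.0, 1.0], size=d_theta)
        theta_pert = theta + delta * Delta

        # Act with the perturbed policy; observe one-step cost and next state.
        phi_sa = features.state_action(s)        # shape (n_actions, d_theta), assumed
        a = rng.choice(n_actions, p=softmax_policy(theta_pert, phi_sa))
        cost, s_next = simulate_step(s, a)       # single-stage cost, next state

        # Fast timescale: TD(0) update of the differential cost and average cost.
        phi_s, phi_sn = features.state(s), features.state(s_next)
        td_err = cost - rho + phi_sn @ v - phi_s @ v
        v += a_fast * td_err * phi_s
        rho += a_fast * (cost - rho)

        # Slow timescale: one-measurement SPSA gradient step (illustrative only).
        theta -= b_slow * rho * Delta / delta

        s = s_next
    return theta, v, rho
```

The step sizes are chosen with a_fast much larger than b_slow so that the critic effectively tracks the differential cost of the current policy before the actor moves, which is the essence of the two-timescale separation.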
Pages: 23 - 52 (29 pages)