Online Planning for Large Markov Decision Processes with Hierarchical Decomposition

Cited by: 25
Authors
Bai, Aijun [1]
Wu, Feng [1]
Chen, Xiaoping [1]
Affiliations
[1] University of Science and Technology of China, School of Computer Science and Technology, Hefei 230026, Anhui, People's Republic of China
Funding
National Research Foundation, Singapore; National Natural Science Foundation of China
Keywords
Algorithms; Experimentation; MDP; online planning; MAXQ-OP; RoboCup; RoboCup soccer; reinforcement; abstraction; search
DOI
10.1145/2717316
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Markov decision processes (MDPs) provide a rich framework for planning under uncertainty. However, exactly solving a large MDP is usually intractable due to the "curse of dimensionality": the state space grows exponentially with the number of state variables. Online algorithms tackle this problem by avoiding computing a policy for the entire state space. On the other hand, since an online algorithm has to find a near-optimal action in almost real time, the available computation time is often very limited. In the context of reinforcement learning, MAXQ is a value function decomposition method that exploits the underlying structure of the original MDP and decomposes it into a combination of smaller subproblems arranged over a task hierarchy. In this article, we present MAXQ-OP, a novel online planning algorithm for large MDPs that utilizes MAXQ hierarchical decomposition in online settings. Compared to traditional online planning algorithms, MAXQ-OP is able to reach much deeper states in the search tree with relatively little computation time by exploiting the MAXQ hierarchical decomposition online. We empirically evaluate our algorithm in the standard Taxi domain, a common benchmark for MDPs, to show the effectiveness of our approach. We have also conducted a long-term case study in a highly complex simulated soccer domain and developed a team named WrightEagle that has won five world championships and finished runner-up five times in the past 10 years of the annual RoboCup Soccer Simulation 2D competition. The results in the RoboCup domain confirm the scalability of MAXQ-OP to very large domains.
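
As a concrete illustration of the decomposition the abstract describes, the following is a minimal Python sketch of MAXQ-style recursive value estimation over a task hierarchy, following Dietterich's identity Q(i, s, a) = V(a, s) + C(i, s, a), where C(i, s, a) is the expected value of completing task i after subtask a finishes. All names here (TaskNode, evaluate, heuristic_value, completion_estimate) are hypothetical stand-ins, not the authors' implementation; MAXQ-OP additionally bounds the search depth and approximates the completion term online.

```python
# Minimal sketch of MAXQ-style recursive value estimation for online action
# selection. Hypothetical structure for illustration; not the authors' code.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional, Tuple

State = Any

@dataclass
class TaskNode:
    name: str
    is_primitive: bool = False
    # Primitive tasks return an immediate reward; composite tasks recurse.
    reward: Optional[Callable[[State], float]] = None
    subtasks: List["TaskNode"] = field(default_factory=list)
    is_terminal: Callable[[State], bool] = lambda s: False

def heuristic_value(task: TaskNode, state: State) -> float:
    # Placeholder leaf evaluation; a real system would plug in a
    # domain-specific heuristic here (assumption, not from the paper).
    return 0.0

def completion_estimate(task: TaskNode, sub: TaskNode, state: State) -> float:
    # Placeholder for the completion term C(i, s, a): the expected value of
    # finishing `task` after `sub` terminates. MAXQ-OP estimates this online.
    return 0.0

def evaluate(task: TaskNode, state: State,
             depth: int, max_depth: int) -> Tuple[float, Optional[TaskNode]]:
    """Return (value estimate, best subtask) for running `task` from `state`,
    using the MAXQ decomposition Q(i, s, a) = V(a, s) + C(i, s, a)."""
    if task.is_primitive:
        return task.reward(state), None
    if depth >= max_depth or task.is_terminal(state):
        return heuristic_value(task, state), None
    best_q, best_sub = float("-inf"), None
    for sub in task.subtasks:
        v_sub, _ = evaluate(sub, state, depth + 1, max_depth)  # V(a, s)
        q = v_sub + completion_estimate(task, sub, state)      # + C(i, s, a)
        if q > best_q:
            best_q, best_sub = q, sub
    return best_q, best_sub
```

In online use, the agent would call evaluate on the root task at every decision point and execute the primitive action the recursion selects; restricting choices to the subtasks of the hierarchy is what lets the search reach deep states within a tight time budget.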
Pages: 28
Related Papers
50 records in total
  • [21] A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes
    Kearns, M.; Mansour, Y.; Ng, A. Y.
    Machine Learning, 2002, 49(2-3): 193-208
  • [23] A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes
    Kearns, M.; Mansour, Y.; Ng, A. Y.
    IJCAI-99: Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, Vols 1 & 2, 1999: 1324-1331
  • [24] Graph Partitioning Techniques for Markov Decision Processes Decomposition
    Sabbadin, R.
    ECAI 2002: 15th European Conference on Artificial Intelligence, Proceedings, 2002, 77: 670-674
  • [25] Exact Decomposition Approaches for Markov Decision Processes: A Survey
    Daoui, Cherki; Abbad, Mohamed; Tkiouat, Mohamed
    Advances in Operations Research, 2010, 2010
  • [26] Online Learning of Safety Function for Markov Decision Processes
    Mazumdar, Abhijit; Wisniewski, Rafal; Bujorianu, Manuela L.
    2023 European Control Conference (ECC), 2023
  • [27] Online Convex Optimization in Adversarial Markov Decision Processes
    Rosenberg, Aviv; Mansour, Yishay
    International Conference on Machine Learning, Vol 97, 2019
  • [28] Online Markov Decision Processes Under Bandit Feedback
    Neu, Gergely; Gyoergy, Andras; Szepesvari, Csaba; Antos, Andras
    IEEE Transactions on Automatic Control, 2014, 59(3): 676-691
  • [29] Online Learning in Markov Decision Processes with Continuous Actions
    Hong, Yi-Te; Lu, Chi-Jen
    Algorithmic Learning Theory (ALT 2015), 2015, 9355: 302-316
  • [30] Multiagent, Multitarget Path Planning in Markov Decision Processes
    Nawaz, Farhad; Ornik, Melkior
    IEEE Transactions on Automatic Control, 2023, 68(12): 7560-7574