Multi-task reinforcement learning in partially observable stochastic environments

Cited by: 0
Authors
Li, Hui [1 ]
Liao, Xuejun [1 ]
Carin, Lawrence [1 ]
Affiliations
[1] Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708-0291, United States
Keywords
Stochastic systems; Iterative methods; Parameter estimation; Learning algorithms; Markov processes
DOI
Not available
Abstract
We consider the problem of multi-task reinforcement learning (MTRL) in multiple partially observable stochastic environments. We introduce the regionalized policy representation (RPR) to characterize the agent's behavior in each environment. The RPR is a parametric model of the conditional distribution over current actions given the history of past actions and observations; the agent's choice of actions is directly based on this conditional distribution, without an intervening model to characterize the environment itself. We propose off-policy batch algorithms to learn the parameters of the RPRs, using episodic data collected when following a behavior policy, and show their linkage to policy iteration. We employ the Dirichlet process as a nonparametric prior over the RPRs across multiple environments. The intrinsic clustering property of the Dirichlet process imposes sharing of episodes among similar environments, which effectively reduces the number of episodes required for learning a good policy in each environment, when data sharing is appropriate. The number of distinct RPRs and the associated clusters (the sharing patterns) are automatically discovered by exploiting the episodic data as well as the nonparametric nature of the Dirichlet process. We demonstrate the effectiveness of the proposed RPR as well as the RPR-based MTRL framework on various problems, including grid-world navigation and multi-aspect target classification. The experimental results show that the RPR is a competitive reinforcement learning algorithm in partially observable domains, and the MTRL consistently achieves better performance than single-task reinforcement learning. © 2009 Hui Li, Xuejun Liao and Lawrence Carin.
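The RPR idea in the abstract can be made concrete with a small numerical sketch. The code below is an illustrative reconstruction, not the paper's implementation: it assumes a finite set of latent belief regions and three illustrative parameter blocks (an initial region distribution, region-transition probabilities conditioned on the last action and the following observation, and per-region action probabilities), and it computes the conditional distribution over the current action given the action-observation history by a forward recursion over the latent region, so no model of the environment itself is required. All names and shapes are assumptions for illustration.

```python
import numpy as np

class RPR:
    """Minimal sketch of a regionalized policy representation with a
    finite number of latent belief regions (illustrative parameters)."""

    def __init__(self, n_regions, n_obs, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        # p(z_1): initial distribution over latent regions.
        self.init = rng.dirichlet(np.ones(n_regions))
        # p(z_t | z_{t-1}, a_{t-1}, o_t): region transitions, shape (Z, A, O, Z).
        self.trans = rng.dirichlet(np.ones(n_regions),
                                   size=(n_regions, n_actions, n_obs))
        # p(a_t | z_t): per-region action probabilities, shape (Z, A).
        self.policy = rng.dirichlet(np.ones(n_actions), size=n_regions)

    def action_distribution(self, history):
        """Return p(a_t | history) for history = [(a_1, o_1), ...],
        computed by a forward recursion over the latent region."""
        alpha = self.init.copy()
        for a, o in history:
            # Condition on the action actually taken, then move the region
            # belief through the transition for the observed (a, o) pair.
            alpha = (alpha * self.policy[:, a]) @ self.trans[:, a, o, :]
            alpha /= alpha.sum()
        # Marginalize the current region to get the action distribution.
        return alpha @ self.policy

rpr = RPR(n_regions=4, n_obs=3, n_actions=2)
print(rpr.action_distribution([(0, 1), (1, 2)]))  # length-2 probability vector
```

Because the recursion carries only a distribution over latent regions, acting from an RPR costs O(Z^2) per step regardless of how long the history grows, which is what makes the representation usable as a direct (model-free) policy in a partially observable domain.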
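The episode-sharing mechanism rests on the clustering property of the Dirichlet process. In the paper the cluster structure is inferred jointly with the RPRs from the episodic data; the sketch below only samples from the prior's predictive rule (the Chinese restaurant process) to illustrate how environments would fall into shared-RPR clusters. The function name and the concentration parameter `alpha` are illustrative assumptions.

```python
import numpy as np

def crp_clusters(n_envs, alpha, seed=0):
    """Sample a partition of environments from the Chinese restaurant
    process, the predictive rule of a Dirichlet process with
    concentration alpha. Environments assigned to the same cluster
    would share one RPR and hence pool their episodes."""
    rng = np.random.default_rng(seed)
    assignments, counts = [], []
    for i in range(n_envs):
        # Join existing cluster k with prob. counts[k] / (i + alpha);
        # open a new cluster with prob. alpha / (i + alpha).
        weights = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(counts):
            counts.append(1)   # new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

# Small alpha favors few large clusters (more episode sharing);
# large alpha favors many singleton clusters (less sharing).
print(crp_clusters(n_envs=10, alpha=1.0))
```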
Pages: 1131-1186
Related papers (50 in total)
  • [1] Multi-task Reinforcement Learning in Partially Observable Stochastic Environments
    Li, Hui
    Liao, Xuejun
    Carin, Lawrence
    JOURNAL OF MACHINE LEARNING RESEARCH, 2009, 10: 1131-1186
  • [2] Learning a navigation task in changing environments by multi-task reinforcement learning
    Grossmann, A
    Poli, R
    ADVANCES IN ROBOT LEARNING, PROCEEDINGS, 2000, 1812: 23-43
  • [3] Inverse Reinforcement Learning in Partially Observable Environments
    Choi, Jaedeug
    Kim, Kee-Eung
    JOURNAL OF MACHINE LEARNING RESEARCH, 2011, 12: 691-730
  • [4] Inverse Reinforcement Learning in Partially Observable Environments
    Choi, Jaedeug
    Kim, Kee-Eung
    21ST INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI-09), PROCEEDINGS, 2009: 1028-1033
  • [5] Multi-task reinforcement learning in humans
    Tomov, Momchil S.
    Schulz, Eric
    Gershman, Samuel J.
    NATURE HUMAN BEHAVIOUR, 2021, 5(6): 764-773
  • [6] Multi-Task Reinforcement Learning for Quadrotors
    Xing, Jiaxu
    Geles, Ismail
    Song, Yunlong
    Aljalbout, Elie
    Scaramuzza, Davide
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10(3): 2112-2119
  • [7] Sparse Multi-Task Reinforcement Learning
    Calandriello, Daniele
    Lazaric, Alessandro
    Restelli, Marcello
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [8] Sparse multi-task reinforcement learning
    Calandriello, Daniele
    Lazaric, Alessandro
    Restelli, Marcello
    INTELLIGENZA ARTIFICIALE, 2015, 9(1): 5-20