Distributionally Robust Model-based Reinforcement Learning with Large State Spaces

Cited: 0
Authors
Ramesh, Shyam Sundhar [1 ]
Sessa, Pier Giuseppe [2 ]
Hu, Yifan [3 ]
Krause, Andreas [2 ]
Bogunovic, Ilija [1 ]
Affiliations
[1] UCL, London, England
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Ecole Polytech Fed Lausanne, Lausanne, Switzerland
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Markov decision processes
DOI
None available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment. To overcome these issues, we study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets. We propose a model-based approach that uses Gaussian processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics, leveraging access to a generative model (i.e., a simulator). We further establish the statistical sample complexity of the proposed method for each of these uncertainty sets. The resulting complexity bounds are independent of the number of states and extend beyond linear dynamics, ensuring the effectiveness of our approach in identifying near-optimal distributionally robust policies. The proposed method can further be combined with model-free distributionally robust reinforcement learning methods to obtain a near-optimal robust policy. Experimental results demonstrate the robustness of our algorithm to distributional shifts and its superior performance in terms of the number of samples needed.
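The two building blocks named in the abstract admit a compact illustration. The Python sketch below is a hedged toy reconstruction, not the authors' implementation: it (1) fits a Gaussian process surrogate to a stand-in simulator and queries it where the posterior variance is largest (the maximum variance reduction principle), and (2) evaluates a KL-robust expectation through its standard scalar dual, inf over {P : KL(P||P0) <= rho} of E_P[V] = sup over beta > 0 of (-beta log E_{P0}[exp(-V/beta)] - beta rho). The names toy_dynamics, the RBF kernel, the radius rho = 0.1, and the quadratic value function are all illustrative assumptions, not choices taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def toy_dynamics(x):
    # Stand-in for the generative model (simulator); an assumption, not the paper's benchmark.
    return np.sin(3.0 * x) + 0.05 * rng.standard_normal(x.shape)

# (1) Learn the nominal dynamics, querying where the GP posterior variance is largest.
candidates = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
X = candidates[rng.choice(len(candidates), size=3, replace=False)]
y = toy_dynamics(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)
for _ in range(20):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[[np.argmax(std)]]          # maximum-variance query point
    X = np.vstack([X, x_next])
    y = np.append(y, toy_dynamics(x_next).ravel())

# (2) KL-robust expectation: inf over {P : KL(P||P0) <= rho} of E_P[V],
#     with P0 uniform over the sampled next states, via the scalar dual in beta.
def kl_robust_expectation(values, rho):
    def neg_dual(beta):
        z = -values / beta
        log_mean_exp = np.max(z) + np.log(np.mean(np.exp(z - np.max(z))))
        return beta * log_mean_exp + beta * rho   # negative of the dual objective
    res = minimize_scalar(neg_dual, bounds=(1e-4, 1e4), method="bounded")
    return -res.fun

mean_next, _ = gp.predict(candidates, return_std=True)
V = -mean_next ** 2                                # illustrative value function (assumption)
print("nominal value:", V.mean())
print("KL-robust value (rho = 0.1):", kl_robust_expectation(V, 0.1))
```

The dual step is the reason such approaches scale: it replaces an infinite-dimensional worst case over transition distributions with a one-dimensional optimization over beta, so only samples from the nominal model are needed.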
Pages: 42
Related Papers
50 items in total
  • [41] A Contraction Approach to Model-based Reinforcement Learning
    Fan, Ting-Han
    Ramadge, Peter J.
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130 : 325 - +
  • [42] Model-Based Reinforcement Learning For Robot Control
    Li, Xiang
    Shang, Weiwei
    Cong, Shuang
    2020 5TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2020), 2020, : 300 - 305
  • [43] Consistency of Fuzzy Model-Based Reinforcement Learning
    Busoniu, Lucian
    Ernst, Damien
    De Schutter, Bart
    Babuska, Robert
    2008 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-5, 2008, : 518 - +
  • [44] Abstraction Selection in Model-Based Reinforcement Learning
    Jiang, Nan
    Kulesza, Alex
    Singh, Satinder
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 37, 2015, 37 : 179 - 188
  • [45] Asynchronous Methods for Model-Based Reinforcement Learning
    Zhang, Yunzhi
    Clavera, Ignasi
    Tsai, Boren
    Abbeel, Pieter
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
  • [46] Online Constrained Model-based Reinforcement Learning
    van Niekerk, Benjamin
    Damianou, Andreas
    Rosman, Benjamin
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI2017), 2017,
  • [47] Calibrated Model-Based Deep Reinforcement Learning
    Malik, Ali
    Kuleshov, Volodymyr
    Song, Jiaming
    Nemer, Danny
    Seymour, Harlan
    Ermon, Stefano
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [48] Skill-based Model-based Reinforcement Learning
    Shi, Lucy Xiaoyang
    Lim, Joseph J.
    Lee, Youngwoon
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022, 205 : 2262 - 2272
  • [49] Model-based exploration in continuous state spaces
    Jong, Nicholas K.
    Stone, Peter
    ABSTRACTION, REFORMULATION, AND APPROXIMATION, PROCEEDINGS, 2007, 4612 : 258 - +
  • [50] Model Gradient: Unified Model and Policy Learning in Model-Based Reinforcement Learning
    Jia, Chengxing
    Zhang, Fuxiang
    Xu, Tian
    Pang, Jing-Cheng
    Zhang, Zongzhang
    Yu, Yang
    FRONTIERS OF COMPUTER SCIENCE, 2024, 18