Continuous-Time Fitted Value Iteration for Robust Policies

Cited by: 3
Authors
Lutter, Michael [1 ]
Belousov, Boris [1 ]
Mannor, Shie [2 ,3 ]
Fox, Dieter [4 ]
Garg, Animesh [5 ]
Peters, Jan [1 ]
Affiliations
[1] Tech Univ Darmstadt, Comp Sci Dept, Intelligent Autonomous Syst Grp, D-64289 Darmstadt, Germany
[2] Technion Israel Inst Technol, Andrew & Erna Viterbi Fac Elect & Comp Engn, IL-3200003 Haifa, Israel
[3] NVIDIA, IL-6121002 Tel Aviv, Israel
[4] Univ Washington, Allen Sch Comp Sci & Engn, NVIDIA, Seattle, WA 98195 USA
[5] Univ Toronto, Comp Sci Dept, NVIDIA, Toronto, ON M5S 1A4, Canada
Keywords
Mathematical models; Optimization; Differential equations; Robots; Heuristic algorithms; Reinforcement learning; Costs; Value iteration; continuous control; dynamic programming; adversarial reinforcement learning; REINFORCEMENT; COST
DOI
10.1109/TPAMI.2022.3215769
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Solving the Hamilton-Jacobi-Bellman equation is important in many domains including control, robotics, and economics. Especially for continuous control, solving this differential equation and its extension, the Hamilton-Jacobi-Isaacs equation, is important as it yields the optimal policy that achieves the maximum reward on a given task. In the case of the Hamilton-Jacobi-Isaacs equation, which includes an adversary controlling the environment and minimizing the reward, the obtained policy is also robust to perturbations of the dynamics. In this paper, we propose continuous fitted value iteration (cFVI) and robust fitted value iteration (rFVI). These algorithms leverage the non-linear control-affine dynamics and separable state and action reward of many continuous control problems to derive the optimal policy and optimal adversary in closed form. This analytic expression simplifies the differential equations and enables us to solve for the optimal value function using value iteration for continuous actions and states, as well as for the adversarial case. Notably, the resulting algorithms do not require discretization of states or actions. We apply the resulting algorithms to the Furuta pendulum and cartpole and show that both obtain the optimal policy. The Sim2Real robustness experiments on the physical systems show that the policies successfully achieve the task in the real world. When changing the masses of the pendulum, we observe that robust value iteration is more robust than deep reinforcement learning algorithms and the non-robust version of the algorithm. Videos of the experiments are available at https://sites.google.com/view/rfvi.
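
The abstract notes that, for control-affine dynamics with a separable state-action reward, the optimal action follows in closed form from the gradient of the value function, which is what makes value iteration tractable for continuous states and actions. Below is a minimal sketch of that idea, assuming a quadratic action cost 0.5 * u^T R u, explicit Euler integration, and a differentiable PyTorch value network; the names (value_net, drift, actuation, q) and the exact update are illustrative assumptions, not the authors' implementation.

import torch

def optimal_action(value_net, x, actuation, R_inv):
    # Closed-form maximizer u*(x) = R^{-1} B(x)^T dV/dx for the reward
    # r(x, u) = q(x) - 0.5 * u^T R u and dynamics xdot = a(x) + B(x) u.
    x = x.clone().requires_grad_(True)
    with torch.enable_grad():
        V = value_net(x).sum()
        dVdx, = torch.autograd.grad(V, x)
    B = actuation(x)                                   # (batch, dim_x, dim_u)
    return torch.einsum("bxu,bx->bu", B, dVdx) @ R_inv

def value_target(value_net, x, q, drift, actuation, R, dt=0.01, gamma=0.99):
    # One-step fitted value-iteration target along the closed-form policy.
    R_inv = torch.inverse(R)
    u = optimal_action(value_net, x, actuation, R_inv)
    xdot = drift(x) + torch.einsum("bxu,bu->bx", actuation(x), u)
    x_next = x + dt * xdot                             # explicit Euler step
    reward = q(x) - 0.5 * torch.einsum("bu,uv,bv->b", u, R, u)
    target = dt * reward + gamma ** dt * value_net(x_next).squeeze(-1)
    return target.detach()

Regressing value_net onto such targets over sampled states gives the fitted value-iteration loop described in the abstract; the adversarial rFVI variant additionally plugs a closed-form worst-case perturbation of the dynamics into the same update.
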
Pages: 5534-5548
Number of pages: 15