H∞ control with constrained input for completely unknown nonlinear systems using data-driven reinforcement learning method

Cited by: 40
Authors
Jiang, He [1 ]
Zhang, Huaguang [1 ]
Luo, Yanhong [1 ]
Cui, Xiaohong [1 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Box 134, Shenyang 110819, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Reinforcement learning; Adaptive dynamic programming; Data-driven; Neural networks; OPTIMAL TRACKING CONTROL; DYNAMIC-PROGRAMMING ALGORITHM; DIFFERENTIAL GRAPHICAL GAMES; POLICY UPDATE ALGORITHM; ZERO-SUM GAME; FEEDBACK-CONTROL; CONTROL DESIGN; TIME-SYSTEMS; ITERATION; SYNCHRONIZATION;
DOI
10.1016/j.neucom.2016.11.041
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper investigates the H-infinity control problem for nonlinear systems with completely unknown dynamics and constrained control input by utilizing a novel data-driven reinforcement learning method. It is known that the nonlinear H-infinity control problem relies on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which is essentially a nonlinear partial differential equation and is generally impossible to solve analytically. To overcome this difficulty, we first propose a model-based simultaneous policy update algorithm that learns the solution of the HJI equation iteratively, and we provide its convergence proof. Then, based on this model-based method, we develop a data-driven, model-free algorithm that requires only sampled data from the real system, generated under arbitrary control inputs and external disturbances, rather than an accurate system model, and we prove that the two algorithms are equivalent. To implement this model-free algorithm, three neural networks (NNs) are employed to approximate the iterative performance index function, the control policy, and the disturbance policy, respectively, and a least-squares approach is used to minimize the NN approximation residual errors. Finally, the proposed scheme is tested on the rotational/translational actuator (RTAC) nonlinear benchmark system.
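For a concrete sense of the simultaneous policy update step described in the abstract, the sketch below works through its linear-quadratic analogue, where the HJI equation reduces to a game algebraic Riccati equation and policy evaluation becomes a Lyapunov solve. This is only a minimal illustration of the iteration, not the paper's method: all system matrices, the attenuation level gamma, and the tolerances are assumptions chosen for the example, and the paper's model-free variant would replace the model-based Lyapunov solves here with least-squares fits of NN weights to sampled trajectory data.

```python
# Minimal sketch: simultaneous policy update for a linear-quadratic zero-sum
# game, the linear analogue of the HJI-based H-infinity design in the abstract.
# All matrices, gamma, and tolerances below are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative plant: x_dot = A x + B u + D w  (u: control, w: disturbance)
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
D = np.array([[0.0],
              [1.0]])
Q = np.eye(2)        # state weight
R = np.eye(1)        # control weight
gamma = 5.0          # disturbance attenuation level (assumed admissible)

P = np.zeros((2, 2))  # V_0(x) = x' P x with P = 0 (zero initial policies)
for i in range(50):
    K = np.linalg.solve(R, B.T @ P)      # control gain,      u = -K x
    L = (D.T @ P) / gamma**2             # disturbance gain,  w =  L x
    Acl = A - B @ K + D @ L              # closed loop under both policies
    M = Q + K.T @ R @ K - gamma**2 * (L.T @ L)
    # Policy evaluation: Acl' P_new + P_new Acl + M = 0  (Lyapunov equation)
    P_new = solve_continuous_lyapunov(Acl.T, -M)
    if np.max(np.abs(P_new - P)) < 1e-9:
        P = P_new
        break
    P = P_new

# Check the game algebraic Riccati equation residual (linear HJI counterpart):
# A'P + PA + Q - P B R^{-1} B' P + gamma^{-2} P D D' P = 0
res = (A.T @ P + P @ A + Q
       - P @ B @ np.linalg.solve(R, B.T @ P)
       + P @ D @ D.T @ P / gamma**2)
print("iterations:", i + 1)
print("P =\n", P)
print("ARE residual norm:", np.linalg.norm(res))
```

Starting from the zero value function, each pass updates the control and disturbance policies simultaneously and then re-evaluates the value function; the small Riccati residual at the end indicates convergence for this toy example, mirroring the convergence behavior the abstract claims for the model-based algorithm.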
Pages: 226-234
Number of pages: 9
Related Papers
50 records in total
  • [1] Robust control scheme for a class of uncertain nonlinear systems with completely unknown dynamics using data-driven reinforcement learning method
    Jiang, He
    Zhang, Huaguang
    Cui, Yang
    Xiao, Geyang
    NEUROCOMPUTING, 2018, 273 : 68 - 77
  • [2] Online adaptive data-driven control for unknown nonlinear systems with constrained-input
    Xi'an University of Architecture and Technology, Xi'an, China
    Int. Conf. Cyber-Energy Syst. Intell. Energy (ICCSIE)
  • [3] Data-Driven MPC for Nonlinear Systems with Reinforcement Learning
    Li, Yiran
    Wang, Qian
    Sun, Zhongqi
    Xia, Yuanqing
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 2404 - 2409
  • [4] Data-Driven Adaptive Optimal Tracking Control for Completely Unknown Systems
    Hou, Dawei
    Na, Jing
    Gao, Guanbin
    Li, Guang
    PROCEEDINGS OF 2018 IEEE 7TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE (DDCLS), 2018, : 1039 - 1044
  • [5] Reinforcement learning-based optimal control of unknown constrained-input nonlinear systems using simulated experience
    Asl, Hamed Jabbari
    Uchibe, Eiji
    NONLINEAR DYNAMICS, 2023, 111 (17) : 16093 - 16110
  • [6] A Data-driven Iterative Learning Control for I/O Constrained Nonlinear Systems
    Chi, Ronghu
    Liu, Xiaohe
    Lin, Na
    Zhang, Ruikun
    2016 14TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), 2016,
  • [7] Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning
    Modares, Hamidreza
    Lewis, Frank L.
    AUTOMATICA, 2014, 50 (07) : 1780 - 1792
  • [8] Data-Driven Reinforcement Learning Control for Quadrotor Systems
    Dang, Ngoc Trung
    Dao, Phuong Nam
    INTERNATIONAL JOURNAL OF MECHANICAL ENGINEERING AND ROBOTICS RESEARCH, 2024, 13 (05) : 495 - 501
  • [9] Optimal Output Feedback Control of Nonlinear Partially-Unknown Constrained-Input Systems Using Integral Reinforcement Learning
    Ren, Ling
    Zhang, Guoshan
    Mu, Chaoxu
    NEURAL PROCESSING LETTERS, 2019, 50 (03) : 2963 - 2989