Robust H∞ tracking of linear discrete-time systems using Q-learning

Cited by: 2
Authors
Valadbeigi, Amir Parviz [1 ,3 ]
Shu, Zhan [1 ]
Khaki Sedigh, Ali [2 ]
Affiliations
[1] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB, Canada
[2] K N Toosi Univ Technol, Dept Elect Engn, Tehran, Iran
[3] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T6G 1H9, Canada
Keywords
auxiliary system; discounted factor; Q-learning; robust H-infinity tracking; H-INFINITY-CONTROL; ZERO-SUM GAMES; FEEDBACK-CONTROL; STABILIZATION; SYNCHRONIZATION;
DOI
10.1002/rnc.6662
Chinese Library Classification
TP [Automation & Computer Technology];
Discipline classification code
0812 ;
Abstract
This paper addresses a robust H-infinity tracking problem with a discount factor. A new auxiliary system is constructed in terms of norm-bounded time-varying uncertainties, and it is shown that solving the robust discounted H-infinity tracking problem for this auxiliary system also solves the original problem. The robust discounted H-infinity tracking problem is then recast as the well-known zero-sum game problem, from which the robust tracking Bellman equation and the robust tracking algebraic Riccati equation (RTARE) are derived. A lower bound on the discount factor is obtained that guarantees stability of the closed-loop system. Based on the auxiliary system, the problem is reshaped into a structure amenable to reinforcement learning methods. Finally, an online Q-learning algorithm that requires no knowledge of the system matrices is proposed to solve the RTARE associated with the robust discounted H-infinity tracking problem for the auxiliary system. Simulation results verify the effectiveness and merits of the proposed method.
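The abstract's approach, a quadratic Q-function over state, control, and worst-case disturbance estimated by policy iteration without access to the system matrices, can be sketched as follows. This is a minimal illustration, not the paper's algorithm or example: the plant (A, B, E), discount factor, attenuation penalty, and exploration noise below are all assumed values, and the learner touches only sampled trajectories.

```python
# Minimal sketch of model-free zero-sum-game Q-learning for discounted
# H-infinity control. All numerical values are illustrative assumptions;
# the learner estimates the quadratic Q-kernel H from data alone.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant (used only to simulate; never read by the learner)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
E = np.array([[0.1], [0.0]])
n, m, q = 2, 1, 1

Qx, Ru = np.eye(n), np.eye(m)
gamma = 0.8       # discount factor (the paper derives a stability lower bound)
beta2 = 5.0       # squared attenuation level penalizing the disturbance

def phi(x, u, w):
    """Quadratic basis: upper-triangular products of the stacked vector."""
    z = np.concatenate([x, u, w])
    return np.array([z[i] * z[j] for i in range(len(z)) for j in range(i, len(z))])

K = np.zeros((m, n))   # control gain,      u = -K x
L = np.zeros((q, n))   # disturbance gain,  w =  L x

for it in range(20):                       # policy iteration
    Phi, y = [], []
    x = rng.standard_normal(n)
    for k in range(400):                   # collect one exploratory trajectory
        u = -K @ x + 0.5 * rng.standard_normal(m)   # exploration noise
        w = L @ x + 0.5 * rng.standard_normal(q)
        xn = A @ x + B @ u + E @ w
        r = x @ Qx @ x + u @ Ru @ u - beta2 * (w @ w)
        un, wn = -K @ xn, L @ xn           # on-policy successor actions
        # Bellman residual: Q(x,u,w) - gamma*Q(xn,un,wn) = r, linear in H
        Phi.append(phi(x, u, w) - gamma * phi(xn, un, wn))
        y.append(r)
        x = xn
        if np.linalg.norm(x) > 1e3:        # reset if the game trajectory blows up
            x = rng.standard_normal(n)
    theta = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]
    # Unpack the least-squares weights into the symmetric kernel H
    d = n + m + q
    H = np.zeros((d, d)); idx = 0
    for i in range(d):
        for j in range(i, d):
            H[i, j] = H[j, i] = theta[idx] / (1 if i == j else 2); idx += 1
    # Saddle-point policy improvement from the H blocks:
    # [Huu Huw; Hwu Hww] [u; w] = -[Hux; Hwx] x
    Hux, Huu, Huw = H[n:n+m, :n], H[n:n+m, n:n+m], H[n:n+m, n+m:]
    Hwx, Hwu, Hww = H[n+m:, :n], H[n+m:, n:n+m], H[n+m:, n+m:]
    M = np.block([[Huu, Huw], [Hwu, Hww]])
    G = np.linalg.solve(M, np.vstack([Hux, Hwx]))
    K, L = G[:m, :], -G[m:, :]

print("learned control gain K:", K)
```

The block M is indefinite at the saddle point (Huu positive, Hww negative when the attenuation penalty dominates), which is what distinguishes this zero-sum game from ordinary LQ tracking Q-learning.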
Pages: 5604-5623
Page count: 20
Related Papers
50 records
  • [1] Reinforcement Q-Learning Algorithm for H∞ Tracking Control of Unknown Discrete-Time Linear Systems
    Peng, Yunjian
    Chen, Qian
    Sun, Weijie
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2020, 50 (11): 4109 - 4122
  • [2] H∞ Tracking Control for Linear Discrete-Time Systems: Model-Free Q-Learning Designs
    Yang, Yunjie
    Wan, Yan
    Zhu, Jihong
    Lewis, Frank L.
    IEEE CONTROL SYSTEMS LETTERS, 2021, 5 (01): 175 - 180
  • [3] Minimax Q-learning design for H∞ control of linear discrete-time systems
    Li, Xinxing
    Xi, Lele
    Zha, Wenzhong
    Peng, Zhihong
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2022, 23 (03) : 438 - 451
  • [4] Model-Free Q-Learning for the Tracking Problem of Linear Discrete-Time Systems
    Li, Chun
    Ding, Jinliang
    Lewis, Frank L.
    Chai, Tianyou
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (03) : 3191 - 3201
  • [5] Optimal trajectory tracking for uncertain linear discrete-time systems using time-varying Q-learning
    Geiger, Maxwell
    Narayanan, Vignesh
    Jagannathan, Sarangapani
    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, 2024, 38 (07) : 2340 - 2368
  • [6] Improved Q-Learning Method for Linear Discrete-Time Systems
    Chen, Jian
    Wang, Jinhua
    Huang, Jie
    PROCESSES, 2020, 8 (03)
  • [7] Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
    Kiumarsi, Bahare
    Lewis, Frank L.
    Modares, Hamidreza
    Karimpour, Ali
    Naghibi-Sistani, Mohammad-Bagher
    AUTOMATICA, 2014, 50 (04) : 1167 - 1175
  • [8] Reinforcement Q-learning algorithm for H∞ tracking control of discrete-time Markov jump systems
    Shi, Jiahui
    He, Dakuo
    Zhang, Qiang
    INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE, 2025, 56 (03) : 502 - 523
  • [9] Fuzzy H∞ Control of Discrete-Time Nonlinear Markov Jump Systems via a Novel Hybrid Reinforcement Q-Learning Method
    Wang, Jing
    Wu, Jiacheng
    Shen, Hao
    Cao, Jinde
    Rutkowski, Leszek
    IEEE TRANSACTIONS ON CYBERNETICS, 2023, 53 (11) : 7380 - 7391
  • [10] An Optimal Tracking Control Method with Q-learning for Discrete-time Linear Switched System
    Zhao, Shangwei
    Wang, Jingcheng
    Wang, Hongyuan
    Xu, Haotian
    PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 1414 - 1419