Reinforcement Learning-based control using Q-learning and gravitational search algorithm with experimental validation on a nonlinear servo system

Cited: 131
Authors
Zamfirache, Iuliu Alexandru [1 ]
Precup, Radu-Emil [1 ]
Roman, Raul-Cristian [1 ]
Petriu, Emil M. [2 ]
Affiliations
[1] Politehn Univ Timisoara, Dept Automat & Appl Informat, Bd V Parvan 2, Timisoara 300223, Romania
[2] Univ Ottawa, Sch Elect Engn & Comp Sci, 800 King Edward, Ottawa, ON K1N 6N5, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Gravitational search algorithm; NN training; Optimal reference tracking control; Q-learning; Reinforcement learning; Servo systems; PARTICLE SWARM OPTIMIZATION; FUZZY-LOGIC; STABILITY; DYNAMICS; DESIGN;
DOI
10.1016/j.ins.2021.10.070
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline classification code
0812 ;
Abstract
This paper presents a novel Reinforcement Learning (RL)-based control approach that combines a Deep Q-Learning (DQL) algorithm with the metaheuristic Gravitational Search Algorithm (GSA). The GSA initializes the weights and biases of the Neural Network (NN) involved in DQL in order to avoid the instability that is the main drawback of traditional randomly initialized NNs. The quality of a particular set of weights and biases is measured at each iteration of the GSA-based initialization using a fitness function that targets the predefined optimal control or learning objective. The data generated during the RL process are used to train an NN-based controller that can autonomously achieve the optimal reference tracking control objective. The proposed approach is compared with similar techniques that use different algorithms in the initialization step, namely the traditional random algorithm, the Grey Wolf Optimizer algorithm, and the Particle Swarm Optimization algorithm. The NN-based controllers obtained with each of these techniques are compared using performance indices specific to optimal control, such as settling time, rise time, peak time, overshoot, and minimum cost function value. Real-time experiments are conducted to validate and test the proposed approach in the framework of the optimal reference tracking control of a nonlinear position servo system. The experimental results show the superiority of this approach over the three competing approaches. (c) 2021 Elsevier Inc. All rights reserved.
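The abstract describes using the GSA to pick a good initial NN weight vector before DQL training, with a fitness function scoring each candidate. The sketch below is a minimal, generic GSA in the spirit of that idea; it is not the authors' implementation, and the agent count, gravitational-constant schedule (`g0`, `alpha`), and the toy fitness function in the usage example are illustrative assumptions.

```python
import numpy as np

def gsa_init(fitness, dim, n_agents=20, iters=50, g0=100.0, alpha=20.0, seed=0):
    """Minimal Gravitational Search Algorithm sketch.

    Each agent is a candidate weight vector for the NN; lower fitness is
    better. Returns the best vector found, which would then be used to
    initialize the NN instead of random initialization.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n_agents, dim))  # agent positions = candidate weights
    v = np.zeros_like(x)                              # agent velocities
    best_x, best_f = None, np.inf
    for t in range(iters):
        f = np.array([fitness(xi) for xi in x])
        i_best = int(np.argmin(f))
        if f[i_best] < best_f:
            best_f, best_x = float(f[i_best]), x[i_best].copy()
        # Masses: better (lower) fitness -> larger normalized mass.
        m = (f.max() - f) / (f.max() - f.min() + 1e-12)
        m = m / (m.sum() + 1e-12)
        # Gravitational constant decays over iterations.
        g = g0 * np.exp(-alpha * t / iters)
        # Acceleration on each agent from the gravity of all other agents.
        a = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i != j:
                    diff = x[j] - x[i]
                    dist = np.linalg.norm(diff) + 1e-12
                    a[i] += rng.random() * g * m[j] * diff / dist
        # Stochastic velocity update, then move the agents.
        v = rng.random((n_agents, 1)) * v + a
        x = x + v
    return best_x, best_f

# Illustrative usage with a toy quadratic fitness standing in for the
# control-objective cost described in the abstract.
best_w, best_cost = gsa_init(lambda w: float(np.sum(w ** 2)), dim=3)
```

In the paper's setting the fitness would instead evaluate the control or learning objective for the servo system; the quadratic here only demonstrates the search mechanics.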
Pages: 99 - 120
Page count: 22
Related Papers
50 records in total
  • [41] Neural Q-Learning Based on Residual Gradient for Nonlinear Control Systems
    Si, Yanna
    Pu, Jiexin
    Zang, Shaofei
    ICCAIS 2019: THE 8TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND INFORMATION SCIENCES, 2019,
  • [42] An Online Home Energy Management System using Q-Learning and Deep Q-Learning
    Izmitligil, Hasan
    Karamancioglu, Abdurrahman
    SUSTAINABLE COMPUTING-INFORMATICS & SYSTEMS, 2024, 43
  • [43] Combining Q-learning and Deterministic Policy Gradient for Learning-based MPC
    Seel, Katrine
    Gros, Sebastien
    Gravdahl, Jan Tommy
    2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 610 - 617
  • [44] Optimal coordination of over current relay using opposition learning-based gravitational search algorithm
    Acharya, Debasis
    Das, Dushmanta Kumar
    JOURNAL OF SUPERCOMPUTING, 2021, 77 (09): : 10721 - 10741
  • [46] Reinforcement learning-based architecture search for quantum machine learning
    Rapp, Frederic
    Kreplin, David A.
    Huber, Marco F.
    Roth, Marco
    MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2025, 6 (01):
  • [47] An ARM-based Q-learning algorithm
    Hsu, Yuan-Pao
    Hwang, Kao-Shing
    Lin, Hsin-Yi
    ADVANCED INTELLIGENT COMPUTING THEORIES AND APPLICATIONS: WITH ASPECTS OF CONTEMPORARY INTELLIGENT COMPUTING TECHNIQUES, 2007, 2 : 11 - +
  • [48] An online scalarization multi-objective reinforcement learning algorithm: TOPSIS Q-learning
    Mirzanejad, Mohammad
    Ebrahimi, Morteza
    Vamplew, Peter
    Veisi, Hadi
    KNOWLEDGE ENGINEERING REVIEW, 2022, 37 (04):
  • [49] Path Following Control for Unmanned Surface Vehicles: A Reinforcement Learning-Based Method With Experimental Validation
    Wang, Yuanda
    Cao, Jingyu
    Sun, Jia
    Zou, Xuesong
    Sun, Changyin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 12 (18237-18250) : 1 - 14
  • [50] Elevator group control algorithm based on residual gradient and Q-learning
    Zong, ZL
    Wang, XG
    Tang, Z
    Zeng, GZ
    SICE 2004 ANNUAL CONFERENCE, VOLS 1-3, 2004, : 329 - 331