Tuning path tracking controllers for autonomous cars using reinforcement learning

Cited by: 0
Authors
Carrasco A.V. [1 ]
Sequeira J.S. [1 ]
Affiliations
[1] Lisbon University, Instituto Superior Técnico, Lisbon
Keywords
Autonomous cars; Autonomous driving systems; Dependability; Non-smooth systems; Path tracking; Q-learning; Reinforcement learning;
DOI
10.7717/PEERJ-CS.1550
Abstract
This article proposes an adaptable path tracking control system, based on reinforcement learning (RL), for autonomous cars. A four-parameter controller shapes the behaviour of the vehicle to navigate lane changes and roundabouts. The tracker is tuned with an 'educated' Q-learning algorithm that minimizes the lateral and steering trajectory errors, which is a key contribution of this article. The CARLA (CAR Learning to Act) simulator was used for both training and testing. The results show that the vehicle adapts its behaviour to the different types of reference trajectories, navigating safely with low tracking errors. The use of a Robot Operating System (ROS) bridge between CARLA and the tracker (i) results in a realistic system, and (ii) simplifies the replacement of CARLA by a real vehicle, as in a hardware-in-the-loop system. Another contribution of this article is the framework for the dependability of the overall architecture based on stability results for non-smooth systems, presented at the end of the article. © Copyright 2023 Vilaça Carrasco and Silva Sequeira
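To illustrate the kind of RL-based tuning the abstract describes, the sketch below applies tabular Q-learning to pick a controller gain that minimizes a tracking-error cost. This is a minimal, assumption-laden toy: the paper tunes four controller parameters inside the CARLA simulator, whereas here a single gain is selected against a mock quadratic error, and all names (`GAINS`, `tracking_error`, `tune_gain`) are hypothetical.

```python
import random

# Illustrative sketch only, NOT the paper's method: tabular, bandit-style
# Q-learning that tunes one controller gain against a mock tracking-error
# cost. The candidate gains and the quadratic cost are assumptions.

GAINS = [0.2, 0.5, 0.8, 1.1, 1.4]   # discrete candidate gains (the "actions")
OPTIMAL = 0.8                        # gain at which the mock error is smallest

def tracking_error(gain):
    """Mock lateral-error cost: zero at the optimal gain, growing quadratically."""
    return (gain - OPTIMAL) ** 2

def tune_gain(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    """Epsilon-greedy Q-learning over the candidate gains; returns the best one."""
    rng = random.Random(seed)
    q = {g: 0.0 for g in GAINS}      # one Q-value per candidate gain
    for _ in range(episodes):
        if rng.random() < epsilon:   # explore: try a random gain
            g = rng.choice(GAINS)
        else:                        # exploit: use the current best gain
            g = max(q, key=q.get)
        reward = -tracking_error(g)  # RL maximizes reward = negative error
        q[g] += alpha * (reward - q[g])  # incremental Q-value update
    return max(q, key=q.get)
```

In the paper's setting, `tracking_error` would be replaced by the lateral and steering errors measured over a CARLA episode, and the action space by joint settings of the four controller parameters.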
Related papers
50 records total
  • [31] Tuning hydrostatic two-output drive-train controllers using reinforcement learning
    Van Vaerenbergh, Kevin
    Vrancx, Peter
    De Hauwere, Yann-Michael
    Nowe, Ann
    Hostens, Erik
    Lauwerys, Christophe
    MECHATRONICS, 2014, 24 (08) : 975 - 985
  • [32] Exploring Deep Reinforcement Learning for Autonomous Powerline Tracking
    Pienroj, Panin
    Schonborn, Sandro
    Birke, Robert
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM 2019 WKSHPS), 2019, : 496 - 501
  • [33] Three-Dimensional Path Tracking Control of Autonomous Underwater Vehicle Based on Deep Reinforcement Learning
    Sun, Yushan
    Zhang, Chenming
    Zhang, Guocheng
    Xu, Hao
    Ran, Xiangrui
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2019, 7 (12)
  • [34] Meta-reinforcement learning for the tuning of PI controllers: An offline approach
    McClement, Daniel G.
    Lawrence, Nathan P.
    Backstroem, Johan U.
    Loewen, Philip D.
    Forbes, Michael G.
    Gopaluni, R. Bhushan
    JOURNAL OF PROCESS CONTROL, 2022, 118 : 139 - 152
  • [35] Deep reinforcement learning with shallow controllers: An experimental application to PID tuning
    Lawrence, Nathan P.
    Forbes, Michael G.
    Loewen, Philip D.
    McClement, Daniel G.
    Backstrom, Johan U.
    Gopaluni, R. Bhushan
    CONTROL ENGINEERING PRACTICE, 2022, 121
  • [36] Comparison of Path Tracking and Torque-Vectoring Controllers for Autonomous Electric Vehicles
    Chatzikomis, Christoforos
    Sorniotti, Aldo
    Gruber, Patrick
    Zanchetta, Mattia
    Willans, Dan
    Balcombe, Bryn
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2018, 3 (04): 559 - 570
  • [37] Comparative Study of Path Tracking Controllers on Low Friction Roads for Autonomous Vehicles
    Lee, Jaepoong
    Yim, Seongjin
    MACHINES, 2023, 11 (03)
  • [38] Spatial Path Tracking Controllers for Autonomous Ground Vehicles: Conventional and Nonconventional Schemes
    Peng Wang
    Di An
    Ning Chen
    Yang Quan Chen
    Guidance, Navigation and Control, 2021, (01) : 53 - 71
  • [39] Self-Scheduling Robust Preview Controllers for Path Tracking and Autonomous Vehicles
    Boyali, Ali
    John, Vijay
    Lyu, Zheming
    Swarn, Rathour
    Mita, Seiichi
    2017 11TH ASIAN CONTROL CONFERENCE (ASCC), 2017, : 1829 - 1834
  • [40] Mobile Robot Path tracking using robust controllers
    Moali, Oumima
    Mami, Abdelkader
    Kara, Kamel
    Oussar, Abdelatif
    Nemra, Abdelkrim
    2019 INTERNATIONAL CONFERENCE ON ADVANCED ELECTRICAL ENGINEERING (ICAEE), 2019