Deep Reinforcement Learning Based Adaptation of Pure-Pursuit Path-Tracking Control for Skid-Steered Vehicles

Cited: 1
Authors
Joglekar, Ajinkya [1 ]
Sathe, Sumedh [1 ]
Misurati, Nicola [1 ]
Srinivasan, Srivatsan [1 ]
Schmid, Matthias J. [1 ]
Krovi, Venkat [1 ]
Affiliations
[1] Clemson Univ, Dept Automot Engn, Greenville, SC 29607 USA
Source
IFAC PAPERSONLINE | 2022 / Vol. 55 / Iss. 37
Keywords
Adaptive control; Path tracking; Offroad systems; Pure pursuit; Deep reinforcement learning; Data-driven control; MODEL-PREDICTIVE CONTROL; IMPLEMENTATION;
DOI
10.1016/j.ifacol.2022.11.216
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
The growing need for autonomous vehicles in the offroad space raises complexities that must be considered more rigorously than in onroad vehicle automation. Popular path-tracking control frameworks in onroad autonomy deployments, such as the pure-pursuit controller, use geometric and kinematic motion models to generate reference trajectories. However, in offroad settings these controllers, despite their merits (low design and computation requirements), can compute dynamically infeasible trajectories, as several of the nominal assumptions made by these models do not hold when operating on 2.5D terrain. Beyond notable challenges such as the uncertainties, non-linearities, and disturbances introduced by unknown/unmapped 2.5D terrain, additional complexities arise from vehicle architectures such as the skid-steer, which must skid laterally to achieve even simple curvilinear motion. Additionally, linear models of skid-steer vehicles often carry high modeling uncertainty, which renders traditional linear optimal and robust control techniques inadequate given their sensitivity to modeling errors. Nonlinear MPC has emerged as an upgrade but must overcome real-time deployment challenges (including slow sampling times, design complexity, and limited computational resources). This provides a unique opportunity to employ data-driven adaptive control methods in tailored application spaces to implicitly learn, and hence compensate for, the unmodeled aspects of robot operation. In this study, we build an adaptive control framework called Deep Reinforcement Learning based Adaptive Pure Pursuit (DRAPP), whose base structure is that of a geometric Pure-Pursuit (PP) algorithm adapted through a policy learned via Deep Reinforcement Learning (DRL).
An additional law that accounts for the rough terrain is added to the DRL policy to prioritize smoother reference-trajectory generation (and thereby more feasible trajectories for lower-level controllers). The adaptive framework converges quickly and generates smoother references than a pure 2D-kinematic path-tracking controller. This work includes extensive simulations and a benchmarking of the DRAPP framework against Nonlinear Model Predictive Control (NMPC), a popular alternative in the literature for this application. Copyright (c) 2022 The Authors. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
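The geometric pure-pursuit law that forms the base structure of DRAPP steers along a circular arc toward a goal point placed one lookahead distance ahead on the path. The following is a minimal illustrative sketch of that standard geometric law (not the authors' implementation; the function name and pose convention are assumptions), showing the quantity a learned policy could adapt, e.g. by selecting the lookahead distance:

```python
import math

def pure_pursuit_curvature(pose, goal, lookahead):
    """Standard geometric pure-pursuit law: curvature of the circular
    arc that passes through a goal point at `lookahead` distance.

    pose: (x, y, yaw) of the vehicle in the world frame
    goal: (x, y) of the lookahead point on the reference path
    """
    x, y, yaw = pose
    # Express the goal point in the vehicle frame; only the lateral
    # offset matters for the arc curvature.
    dx, dy = goal[0] - x, goal[1] - y
    y_veh = -math.sin(yaw) * dx + math.cos(yaw) * dy
    # kappa = 2 * lateral_offset / lookahead^2
    return 2.0 * y_veh / (lookahead ** 2)
```

A skid-steer platform would then convert this curvature (with a commanded speed) into differential left/right track velocities; in an adaptive scheme such as the one the abstract describes, the DRL policy compensates for the skidding and terrain effects that this purely kinematic law ignores.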
Pages: 400-407 (8 pages)