Deep Reinforcement Learning Based Adaptation of Pure-Pursuit Path-Tracking Control for Skid-Steered Vehicles

Cited by: 1
Authors
Joglekar, Ajinkya [1 ]
Sathe, Sumedh [1 ]
Misurati, Nicola [1 ]
Srinivasan, Srivatsan [1 ]
Schmid, Matthias J. [1 ]
Krovi, Venkat [1 ]
Affiliations
[1] Clemson Univ, Dept Automot Engn, Greenville, SC 29607 USA
Source
IFAC PAPERSONLINE | 2022, Vol. 55, Issue 37
Keywords
Adaptive control; Path tracking; Offroad systems; Pure pursuit; Deep reinforcement learning; Data-driven control; MODEL-PREDICTIVE CONTROL; IMPLEMENTATION;
DOI
10.1016/j.ifacol.2022.11.216
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The growing need for autonomous vehicles in the offroad space raises complexities that must be considered more rigorously than in onroad vehicle automation. Popular path-tracking control frameworks in onroad autonomy deployments, such as the pure-pursuit controller, use geometric and kinematic motion models to generate reference trajectories. However, in offroad settings these controllers, despite their merits (low design and computation requirements), can compute dynamically infeasible trajectories because several of the nominal assumptions made by these models do not hold when operating on 2.5D terrain. Beyond the notable challenges of uncertainties and non-linearities/disturbances introduced by unknown/unmapped 2.5D terrain, additional complexities arise from vehicle architectures such as the skid-steer, which rely on lateral skidding to achieve even simple curvilinear motion. Additionally, linear models of skid-steer vehicles often carry high modeling uncertainty, which renders traditional linear optimal and robust control techniques inadequate given their sensitivity to modeling errors. Nonlinear MPC has emerged as an upgrade but must overcome real-time deployment challenges (including slow sampling times, design complexity, and limited computational resources). This provides a unique opportunity to use data-driven adaptive control methods in tailored application spaces to implicitly learn, and hence compensate for, the unmodeled aspects of robot operation. In this study, we build an adaptive control framework called Deep Reinforcement Learning based Adaptive Pure Pursuit (DRAPP), whose base structure is a geometric Pure-Pursuit (PP) algorithm adapted through a policy learned using Deep Reinforcement Learning (DRL). An additional law that accounts for the rough terrain is added to the DRL policy to prioritize smoother reference-trajectory generation (and thereby more feasible trajectories for lower-level controllers). The adaptive framework converges quickly and generates smoother references than a pure 2D kinematic path-tracking controller. This work includes extensive simulations and a benchmarking of the DRAPP framework against Nonlinear Model Predictive Control (NMPC), an alternative popular choice in the literature for this application. Copyright (c) 2022 The Authors. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
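The abstract describes the core mechanism only at a high level: a geometric pure-pursuit law whose behavior is adapted online by a DRL policy. The sketch below is a minimal illustration of that idea, not the paper's implementation; the observation features, the choice of the lookahead distance as the adapted parameter, the bounds l_min/l_max, and the dummy_policy stand-in for a trained network are all assumptions made for this example.

```python
import math
import numpy as np

def pure_pursuit_curvature(pose, target):
    """Classic geometric pure pursuit: curvature of the circular arc from the
    vehicle to the target point, expressed in the vehicle frame."""
    x, y, yaw = pose
    dx, dy = target[0] - x, target[1] - y
    lat = -math.sin(yaw) * dx + math.cos(yaw) * dy   # lateral offset of target
    ld2 = dx * dx + dy * dy                           # squared lookahead distance
    return 2.0 * lat / ld2 if ld2 > 1e-6 else 0.0

def drl_adapted_pure_pursuit_step(pose, path, v, policy, l_min=1.0, l_max=6.0):
    """One control step of a hypothetical DRL-adapted pure pursuit: the learned
    policy selects the lookahead distance from tracking features, and the
    geometric law converts the resulting target point into a yaw-rate command
    for a skid-steer base (omega = v * curvature)."""
    # Assumed observation: cross-track error, vehicle yaw, and speed.
    nearest = min(path, key=lambda p: (p[0] - pose[0]) ** 2 + (p[1] - pose[1]) ** 2)
    e_ct = math.hypot(nearest[0] - pose[0], nearest[1] - pose[1])
    obs = np.array([e_ct, pose[2], v])
    ld = float(np.clip(policy(obs), l_min, l_max))    # policy output = lookahead distance
    # The first path point at least ld away from the vehicle becomes the target.
    target = next((p for p in path
                   if math.hypot(p[0] - pose[0], p[1] - pose[1]) >= ld), path[-1])
    return v * pure_pursuit_curvature(pose, target)   # commanded yaw rate

# Stand-in for a trained DRL policy (a neural network in the actual framework).
dummy_policy = lambda obs: 2.0 + 0.5 * obs[2]
path = [(s, 0.1 * s ** 2) for s in np.linspace(0.0, 20.0, 200)]
omega = drl_adapted_pure_pursuit_step((0.0, -0.5, 0.0), path, v=2.0, policy=dummy_policy)
print(f"commanded yaw rate: {omega:.3f} rad/s")
```

The paper's additional terrain-aware law that smooths the generated reference, and the benchmarking against NMPC, are outside the scope of this sketch.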
Pages: 400-407
Number of pages: 8
Related Papers
50 in total
  • [21] Path Tracking Control for Four-Wheel Independent Steering and Driving Vehicles Based on Improved Deep Reinforcement Learning
    Hua, Xia
    Zhang, Tengteng
    Cheng, Xiangle
    Ning, Xiaobin
    TECHNOLOGIES, 2024, 12 (11)
  • [22] Path-Tracking and Lateral Stabilization for Automated Vehicles via Learning-Based Robust Model Predictive Control
    Wu, Xitao
    Wei, Chao
    Zhang, Hao
    Jiang, Chaoyang
    Hu, Chuan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (12) : 18571 - 18583
  • [23] Mixed logical dynamic based path-tracking model predictive control for autonomous vehicles
    Fu, Tengfei
    Jing, Houhua
    Zhou, Hongliang
    Liu, Zhiyuan
    2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, : 189 - 196
  • [24] Path-Tracking Considering Yaw Stability With Passivity-Based Control for Autonomous Vehicles
    Ma, Yan
    Chen, Jian
    Wang, Junmin
    Xu, Yanchuan
    Wang, Yuexuan
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (07) : 8736 - 8746
  • [25] Self-learning Path-tracking Control of Autonomous Vehicles Using Kernel-based Approximate Dynamic Programming
    Xu, Xin
    Zhang, Hongyu
    Dai, Bin
    He, Han-gen
    2008 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-8, 2008, : 2182 - 2189
  • [26] Adaptive MPC path-tracking controller based on reinforcement learning and preview-based PID controller
    Feng, Kun
    Li, Xu
    Li, Wenli
    PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART D-JOURNAL OF AUTOMOBILE ENGINEERING, 2024,
  • [27] Path Tracking Control of Tracked Paver Based on Improved Pure Pursuit Algorithm
    Wang, Shuai
    Fu, Shanshan
    Li, Bin
    Wang, Shoukun
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 4187 - 4192
  • [28] MME-EKF-Based Path-Tracking Control of Autonomous Vehicles Considering Input Saturation
    Hu, Chuan
    Wang, Zhenfeng
    Taghavifar, Hamid
    Na, Jing
    Qin, Yechen
    Guo, Jinghua
    Wei, Chongfeng
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2019, 68 (06) : 5246 - 5259
  • [29] Trajectory tracking control of vectored thruster autonomous underwater vehicles based on deep reinforcement learning
    Liu, Tao
    Zhao, Jintao
    Hu, Yuli
    Huang, Junhao
    SHIPS AND OFFSHORE STRUCTURES, 2024,
  • [30] A Beam Tracking Scheme Based on Deep Reinforcement Learning for Multiple Vehicles
    Cheng, Binyao
    Zhao, Long
    He, Zibo
    Zhang, Ping
    COMMUNICATIONS AND NETWORKING (CHINACOM 2021), 2022, : 291 - 305