Spatiotemporal Costmap Inference for MPC Via Deep Inverse Reinforcement Learning

Cited by: 15
Authors
Lee, Keuntaek [1 ]
Isele, David [2 ]
Theodorou, Evangelos A. [3 ]
Bae, Sangjae [2 ]
Affiliations
[1] Georgia Inst Technol, Dept Elect & Comp Engn, Atlanta, GA 30318 USA
[2] Honda Res Inst USA Inc, Div Res, San Jose, CA 95110 USA
[3] Georgia Inst Technol, Sch Aerosp Engn, Atlanta, GA 30318 USA
Keywords
Learning from demonstration; reinforcement learning; optimization and optimal control; motion and path planning; autonomous vehicle navigation;
DOI
10.1109/LRA.2022.3146635
CLC classification number
TP24 [Robotics];
Subject classification codes
080202; 1405
Abstract
It can be difficult to autonomously produce driver behavior so that it appears natural to other traffic participants. Through Inverse Reinforcement Learning (IRL), we can automate this process by learning the underlying reward function from human demonstrations. We propose a new IRL algorithm that learns a goal-conditioned spatio-temporal reward function. The resulting costmap is used by Model Predictive Controllers (MPCs) to perform a task without any hand-designing or hand-tuning of the cost function. We evaluate our proposed Goal-conditioned SpatioTemporal Zeroing Maximum Entropy Deep IRL (GSTZ)-MEDIRL framework together with MPC in the CARLA simulator for autonomous driving, lane keeping, and lane changing tasks in a challenging dense traffic highway scenario. Our proposed methods show higher success rates compared to other baseline methods including behavior cloning, state-of-the-art RL policies, and MPC with a learning-based behavior prediction model.
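To illustrate the core idea behind Maximum Entropy IRL that the abstract builds on, the following is a minimal, self-contained sketch on a hypothetical 1-D chain MDP. The paper's (GSTZ)-MEDIRL learns a goal-conditioned spatiotemporal costmap with a deep network; this tabular version only shows the shared gradient principle (empirical minus expected state visitation frequencies). All names and the toy MDP are illustrative, not the authors' implementation.

```python
import numpy as np

# Toy chain MDP: 5 states, actions 0 = left, 1 = right (hypothetical setup).
n_states, n_actions, horizon = 5, 2, 6

def step(s, a):
    """Deterministic transition on the chain, clipped at the ends."""
    return max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)

def expected_svf(reward):
    """Expected state visitation frequencies under the MaxEnt policy."""
    # Backward pass: finite-horizon soft value iteration.
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states, n_actions))
    for t in reversed(range(horizon)):
        Q = np.array([[reward[s] + V[step(s, a)] for a in range(n_actions)]
                      for s in range(n_states)])
        V = np.logaddexp.reduce(Q, axis=1)
        policy[t] = np.exp(Q - V[:, None])
    # Forward pass: propagate the start distribution through the policy.
    d = np.zeros(n_states)
    d[0] = 1.0  # demonstrations start at state 0
    svf = np.zeros(n_states)
    for t in range(horizon):
        svf += d
        d_next = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                d_next[step(s, a)] += d[s] * policy[t, s, a]
        d = d_next
    return svf

# One demonstration that moves right toward the goal state 4.
demos = [[0, 1, 2, 3, 4, 4]]
empirical = np.zeros(n_states)
for traj in demos:
    for s in traj:
        empirical[s] += 1.0
empirical /= len(demos)

# MaxEnt IRL gradient ascent: grad log-likelihood = empirical - expected SVF.
reward = np.zeros(n_states)
for _ in range(200):
    reward += 0.1 * (empirical - expected_svf(reward))

print(int(np.argmax(reward)))  # the learned reward peaks at the goal state
```

Negating the learned reward yields a costmap; in the paper this role is played by the goal-conditioned spatiotemporal costmap consumed by the MPC.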
Pages: 3194-3201
Page count: 8
Related papers
50 items in total
  • [31] Generative Inverse Deep Reinforcement Learning for Online Recommendation
    Chen, Xiaocong
    Yao, Lina
    Sun, Aixin
    Wang, Xianzhi
    Xu, Xiwei
    Zhu, Liming
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 201 - 210
  • [32] Learning Fairness from Demonstrations via Inverse Reinforcement Learning
    Blandin, Jack
    Kash, Ian
    PROCEEDINGS OF THE 2024 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, ACM FACCT 2024, 2024, : 51 - 61
  • [33] Learning Tasks in Intelligent Environments via Inverse Reinforcement Learning
    Shah, Syed Ihtesham Hussain
    Coronato, Antonio
2021 17TH INTERNATIONAL CONFERENCE ON INTELLIGENT ENVIRONMENTS (IE), 2021
  • [34] Methodologies for Imitation Learning via Inverse Reinforcement Learning: A Review
Zhang, K.
Yu, Y.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2019, 56 (02): : 254 - 261
  • [35] Dual MPC with Reinforcement Learning
    Morinelly, Juan E.
    Ydstie, B. Erik
    IFAC PAPERSONLINE, 2016, 49 (07): : 266 - 271
  • [36] Stochastic intervention for causal inference via reinforcement learning
    Duong, Tri Dung
    Li, Qian
    Xu, Guandong
    NEUROCOMPUTING, 2022, 482 : 40 - 49
  • [37] Multi-layer Control Architecture for Unsignalized Intersection Management via Nonlinear MPC and Deep Reinforcement Learning
    Hamouda, Ahmed H.
    Mahfouz, Dalia M.
    Elias, Catherine M.
    Shehata, Omar M.
    2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021, : 1990 - 1996
  • [38] Drones Objective Inference Using Policy Error Inverse Reinforcement Learning
    Perrusquia, Adolfo
    Guo, Weisi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 1329 - 1340
  • [39] MEDIRL: Predicting the Visual Attention of Drivers via Maximum Entropy Deep Inverse Reinforcement Learning
    Baee, Sonia
    Pakdamanian, Erfan
    Kim, Inki
    Feng, Lu
    Ordonez, Vicente
    Barnes, Laura
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 13158 - 13168
  • [40] Combining long and short spatiotemporal reasoning for deep reinforcement learning
    Liu, Huiling
    Liu, Peng
    Bai, Chenjia
    NEUROCOMPUTING, 2025, 619