Probabilistic Prediction of Interactive Driving Behavior via Hierarchical Inverse Reinforcement Learning

Cited: 0
Authors
Sun, Liting [1 ]
Zhan, Wei [1 ]
Tomizuka, Masayoshi [1 ]
Affiliation
[1] Univ Calif Berkeley, Dept Mech Engn, Berkeley, CA 94720 USA
Keywords
DOI
Not available
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Autonomous vehicles (AVs) are on the road. To safely and efficiently interact with other road participants, AVs have to accurately predict the behavior of surrounding vehicles and plan accordingly. Such prediction should be probabilistic, to address the uncertainties in human behavior. Such prediction should also be interactive, since the distribution over all possible trajectories of the predicted vehicle depends not only on historical information but also on the future plans of other vehicles that interact with it. To achieve such interaction-aware predictions, we propose a probabilistic prediction approach based on hierarchical inverse reinforcement learning (IRL). First, we explicitly consider the hierarchical trajectory-generation process of human drivers, which involves both discrete and continuous driving decisions. Based on this, the distribution over all future trajectories of the predicted vehicle is formulated as a mixture of distributions partitioned by the discrete decisions. We then apply IRL hierarchically to learn the distributions from real human demonstrations. A case study of the ramp-merging driving scenario is provided. The quantitative results show that the proposed approach can accurately predict both the discrete driving decisions, such as yield or pass, and the continuous trajectories.
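The mixture formulation described in the abstract — a discrete decision level (e.g. yield vs. pass) partitioning a continuous trajectory level, with each level modeled as a maximum-entropy (Boltzmann) distribution over reward features — can be sketched as below. This is a minimal illustration, not the paper's actual model: the feature vectors, the weights `theta_d` and `theta_c`, and the softmax parameterization are hypothetical stand-ins; in the paper the reward weights are learned hierarchically via IRL from human demonstrations.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: P ∝ exp(x)."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def hierarchical_prediction(decision_features, traj_features_by_decision,
                            theta_d, theta_c):
    """Return P(d) over discrete decisions and the mixture components
    P(d) * P(xi | d) over candidate trajectories xi.

    Both levels follow the max-entropy IRL form P ∝ exp(theta · f),
    so the overall trajectory distribution is
        P(xi) = sum_d P(d) * P(xi | d).
    """
    # Discrete level: distribution over decisions (e.g. yield vs. pass).
    p_decision = softmax(decision_features @ theta_d)
    # Continuous level: distribution over sampled candidate trajectories
    # within each decision's partition of the trajectory space.
    components = []
    for d, feats in enumerate(traj_features_by_decision):
        p_traj_given_d = softmax(feats @ theta_c)
        components.append(p_decision[d] * p_traj_given_d)
    return p_decision, components
```

By construction the component masses sum to one across all decisions and trajectories, since each conditional distribution normalizes to one and is weighted by P(d).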
Pages: 2111-2117
Page count: 7
Related Papers
50 in total
  • [41] Regularising neural networks for future trajectory prediction via inverse reinforcement learning framework
    Choi, Dooseop
    Min, Kyoungwook
    Choi, Jeongdan
    IET COMPUTER VISION, 2020, 14 (05) : 192 - 200
  • [42] Modular inverse reinforcement learning for visuomotor behavior
    Rothkopf, Constantin A.
    Ballard, Dana H.
    BIOLOGICAL CYBERNETICS, 2013, 107 (04) : 477 - 490
  • [43] Haptic Assistance via Inverse Reinforcement Learning
    Scobee, Dexter R. R.
    Royo, Vicenc Rubies
    Tomlin, Claire J.
    Sastry, S. Shankar
    2018 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2018, : 1510 - 1517
  • [45] Towards Interpretable Deep Reinforcement Learning Models via Inverse Reinforcement Learning
    Xie, Yuansheng
    Vosoughi, Soroush
    Hassanpour, Saeed
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 5067 - 5074
  • [46] HIERARCHICAL CACHING VIA DEEP REINFORCEMENT LEARNING
    Sadeghi, Alireza
    Wang, Gang
    Giannakis, Georgios B.
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3532 - 3536
  • [47] Personalized Car Following for Autonomous Driving with Inverse Reinforcement Learning
    Zhao, Zhouqiao
    Wang, Ziran
    Han, Kyungtae
    Gupta, Rohit
    Tiwari, Prashant
    Wu, Guoyuan
    Barth, Matthew J.
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 2891 - 2897
  • [48] Multiscale Anticipatory Behavior by Hierarchical Reinforcement Learning
    Rungger, Matthias
    Ding, Hao
    Stursberg, Olaf
    ANTICIPATORY BEHAVIOR IN ADAPTIVE LEARNING SYSTEMS: FROM PSYCHOLOGICAL THEORIES TO ARTIFICIAL COGNITIVE SYSTEMS, 2009, 5499 : 301 - 320
  • [49] Video Captioning via Hierarchical Reinforcement Learning
    Wang, Xin
    Chen, Wenhu
    Wu, Jiawei
    Wang, Yuan-Fang
    Wang, William Yang
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 4213 - 4222
  • [50] DiPA: Probabilistic Multi-Modal Interactive Prediction for Autonomous Driving
    Knittel, Anthony
    Hawasly, Majd
    Albrecht, Stefano V.
    Redford, John
    Ramamoorthy, Subramanian
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (08) : 4887 - 4894