Learning an Interpretable Model for Driver Behavior Prediction with Inductive Biases

Cited by: 2
Authors
Arbabi, Salar [1 ]
Tavernini, Davide [1 ]
Fallah, Saber [1 ]
Bowden, Richard [2 ]
Affiliations
[1] Univ Surrey, Ctr Automot Engn, Guildford GU2 7XH, Surrey, England
[2] Univ Surrey, Ctr Vis Speech & Signal Proc, Guildford GU2 7XH, Surrey, England
Source
2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | 2022
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
DOI
10.1109/IROS47612.2022.9981142
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Classification Code
0812
Abstract
To plan safe maneuvers and act with foresight, autonomous vehicles must be capable of accurately predicting the uncertain future. In the context of autonomous driving, deep neural networks have been successfully applied to learning predictive models of human driving behavior from data. However, the predictions suffer from cascading errors, resulting in large inaccuracies over long time horizons. Furthermore, the learned models are black boxes, and thus it is often unclear how they arrive at their predictions. In contrast, rule-based models, which are informed by human experts, maintain long-term coherence in their predictions and are human-interpretable. However, such models often lack the expressiveness needed to capture complex real-world dynamics. In this work, we begin to close this gap by embedding the Intelligent Driver Model, a popular hand-crafted driver model, into deep neural networks. Our model's transparency can offer considerable advantages, e.g., in debugging the model and more easily interpreting its predictions. We evaluate our approach on a simulated merging scenario, showing that it yields a robust model that is end-to-end trainable and provides greater transparency at no cost to the model's predictive accuracy.
Pages: 3940-3947
Number of pages: 8
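For context on the method named in the abstract: the Intelligent Driver Model (IDM) is a standard car-following law (Treiber, Hennecke, and Helbing, 2000) whose output acceleration is a differentiable function of the ego speed, the approach rate, and the gap to the lead vehicle, which is what makes it possible to embed inside a neural network and train end to end. The sketch below illustrates one such embedding in PyTorch. It is a minimal illustration only: the class and layer names (NeuralIDM, param_net) and the architecture choices are assumptions for this sketch, not taken from the paper.

```python
import torch
import torch.nn as nn

class NeuralIDM(nn.Module):
    """Hypothetical sketch: a small network predicts the five IDM
    parameters from scene features, and the standard IDM equation
    (Treiber et al., 2000) maps them to an acceleration. Names and
    architecture are illustrative, not the paper's actual model."""

    def __init__(self, obs_dim: int):
        super().__init__()
        self.param_net = nn.Sequential(
            nn.Linear(obs_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 5),
            nn.Softplus(),  # keeps all IDM parameters positive
        )

    def forward(self, obs, v, dv, s):
        # obs: scene features, shape (batch, obs_dim)
        # v: ego speed; dv: approach rate (v - v_lead); s: gap to lead
        # Predicted IDM parameters: desired speed v0, time headway T,
        # max acceleration a_max, comfortable deceleration b, min gap s0.
        v0, T, a_max, b, s0 = self.param_net(obs).unbind(dim=-1)
        # Desired dynamic gap s* from the standard IDM formulation.
        s_star = s0 + v * T + v * dv / (2.0 * torch.sqrt(a_max * b))
        # IDM acceleration law with the conventional exponent delta = 4.
        return a_max * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)
```

Because the IDM parameters (desired speed, time headway, and so on) are the network's outputs, they can be inspected directly, which is the kind of transparency the abstract describes. A training loop would roll the predicted acceleration forward through the vehicle dynamics and penalize trajectory error, keeping the whole pipeline end-to-end differentiable.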