More Persuasive Explanation Method for End-to-End Driving Models

Cited by: 2
Authors
Zhang, Chenkai [1 ]
Deguchi, Daisuke [1 ]
Okafuji, Yuki [2 ]
Murase, Hiroshi [1 ]
Affiliations
[1] Nagoya Univ, Grad Sch Informat, Nagoya 4648601, Japan
[2] CyberAgent Inc, AI Lab, Tokyo 1506121, Japan
Funding
Japan Science and Technology Agency (JST); Japan Society for the Promotion of Science (JSPS);
Keywords
Task analysis; Predictive models; Pipelines; Autonomous vehicles; Computational modeling; Autonomous driving; Convolutional neural networks; convolutional neural network; end-to-end model; explainability;
DOI
10.1109/ACCESS.2023.3235739
CLC classification number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
With the rapid development of autonomous driving technology, a variety of high-performance end-to-end driving models (E2EDMs) have been proposed. To understand the computational behavior of E2EDMs, pixel-level explanation methods are used to obtain explanations of their predictions. However, little attention has been paid to the quality of these explanations. Therefore, in order to build trustworthy E2EDMs, we focus on improving the persuasibility of their explanations. We propose an object-level explanation method (main approach) for E2EDMs, which masks the objects in an image and treats the resulting change in the prediction as the importance of each object; the E2EDM is then explained in terms of these object importances. To further validate the effectiveness of object-level explanations, we propose another approach (validation approach), which trains E2EDMs with object information as input and generates object importances using general explanation methods. Both approaches generate object-level explanations. To compare these object-level explanations with traditional pixel-level explanations, we propose experimental methods that measure the persuasibility of E2EDM explanations with a subjective and an objective method. The subjective method evaluates persuasibility by the extent to which participants judge the feature importances indicated by the explanations to be correct. The objective method evaluates persuasibility by the similarity between human annotations produced when only the important parts of the images are shown and those produced when the complete images are shown. The experimental results show that object-level explanations are more persuasive than traditional pixel-level explanations.
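A minimal sketch of the main approach described in the abstract: each object is masked out and the change in the model's prediction is taken as that object's importance. It is written in plain NumPy; the gray fill value, the scalar driving output (e.g., a steering angle), and the toy stand-in model are illustrative assumptions, not details fixed by the paper.

    import numpy as np

    def object_importance(image, object_masks, predict, fill_value=0.5):
        """Perturbation-based object-level importance (sketch).

        image:        H x W x 3 float array in [0, 1]
        object_masks: dict mapping object name -> boolean H x W mask
        predict:      callable mapping an image to a scalar driving
                      output; stands in for the E2EDM
        fill_value:   value used to mask objects out (an assumption;
                      the fill strategy is not specified here)
        """
        baseline = predict(image)
        importance = {}
        for name, mask in object_masks.items():
            perturbed = image.copy()
            perturbed[mask] = fill_value  # mask out a single object
            # Importance = change in the model's prediction.
            importance[name] = abs(predict(perturbed) - baseline)
        return importance

    # Usage with a hypothetical stand-in model (replace with a real E2EDM):
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.random((64, 64, 3))
        masks = {
            "car": np.zeros((64, 64), dtype=bool),
            "sign": np.zeros((64, 64), dtype=bool),
        }
        masks["car"][10:30, 10:30] = True
        masks["sign"][40:60, 40:60] = True
        toy_model = lambda x: float(x.mean())  # placeholder for the E2EDM
        print(object_importance(img, masks, toy_model))

Note that this covers only the masking-based main approach; in the paper's validation approach, object importances would instead come from general explanation methods applied to an E2EDM trained with object information as input.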
Pages: 4270-4282
Number of pages: 13
Related Papers
50 items in total
  • [31] A Review of End-to-End Autonomous Driving in Urban Environments
    Coelho, Daniel
    Oliveira, Miguel
    IEEE ACCESS, 2022, 10 : 75296 - 75311
  • [32] Telomeres: The molecular events driving end-to-end fusions
    Bertuch, AA
    CURRENT BIOLOGY, 2002, 12 (21) : R738 - R740
  • [33] A Survey of End-to-End Driving: Architectures and Training Methods
    Tampuu, Ardi
    Matiisen, Tambet
    Semikin, Maksym
    Fishman, Dmytro
    Muhammad, Naveed
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (04) : 1364 - 1384
  • [34] End-to-End Federated Learning for Autonomous Driving Vehicles
    Zhang, Hongyi
    Bosch, Jan
    Olsson, Helena Holmstrom
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [35] End-to-End Race Driving with Deep Reinforcement Learning
    Jaritz, Maximilian
    de Charette, Raoul
    Toromanoff, Marin
    Perot, Etienne
    Nashashibi, Fawzi
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018, : 2070 - 2075
  • [36] SEECAD: Semantic End-to-End Communication for Autonomous Driving
    Ribouh, Soheyb
    Hadid, Abdenour
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 1808 - 1813
  • [37] End-to-end Spatiotemporal Attention Model for Autonomous Driving
    Zhao, Ruijie
    Zhang, Yanxin
    Huang, Zhiqing
    Yin, Chenkun
    PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020, : 2649 - 2653
  • [38] ReasonNet: End-to-End Driving with Temporal and Global Reasoning
    Shao, Hao
    Wang, Letian
    Chen, Ruobing
    Waslander, Steven L.
    Li, Hongsheng
    Liu, Yu
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 13723 - 13733
  • [39] Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models
    Wang, Tsun-Hsuan
    Maalouf, Alaa
    Xia, Wei
    Bao, Yutong
    Amini, Alexander
    Rosman, Guy
    Karaman, Sertac
    Rus, Daniela
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024, : 6687 - 6694
  • [40] ON THE METHOD OF INTESTINAL END-TO-END ANASTOMOSES
    SIGAL, MZ
    RAMAZANOV, MR
    VESTNIK KHIRURGII IMENI I I GREKOVA, 1987, 139 (09) : 119 - 121