Visual Explanation by Attention Branch Network for End-to-end Learning-based Self-driving

Cited by: 0
Authors:
Mori, Keisuke [1 ]
Fukui, Hiroshi [1 ]
Murase, Takuya [1 ]
Hirakawa, Tsubasa [1 ]
Yamashita, Takayoshi [1 ]
Fujiyoshi, Hironobu [1 ]
Affiliation:
[1] Chubu Univ, Kasugai, Aichi 4878501, Japan
Keywords:
DOI: not available
CLC number: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract:
Self-driving requires deciding appropriate control actions based on the surrounding environment. To this end, self-driving control methods using a convolutional neural network (CNN) have been studied, in which a vehicle-mounted camera image is fed directly into the network and a steering angle is output. However, to control not only the steering but also the throttle, the network must capture the state of the car itself in addition to the surrounding environment. Moreover, to use CNNs in safety-critical applications such as self-driving, it is important to analyze where in the image the network focuses and to understand its decision making. In this work, we propose a method that addresses both problems. First, to control steering and throttle simultaneously, we use the current vehicle speed as the state of the car itself. Second, we introduce an attention branch network (ABN) architecture into the self-driving model, which makes it possible to visually analyze the reasons for the driving decisions through an attention map. Experimental results with a driving simulator demonstrate that our method controls the car stably and that its decision making can be analyzed via the attention map.
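The abstract describes two technical ideas: feeding the current vehicle speed into the network as the car's own state, and attaching an attention branch so that an attention map both re-weights the features and serves as a visual explanation. Below is a minimal PyTorch-style sketch of that idea; the layer sizes, module names (ABNDrivingModel, attention_branch, perception_branch), and the residual attention formulation f * (1 + M) are illustrative assumptions, not the authors' exact architecture or training setup.

# Minimal sketch of an ABN-style driving model (assumed PyTorch implementation;
# layer sizes and names are illustrative, not the paper's exact architecture).
import torch
import torch.nn as nn


class ABNDrivingModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional feature extractor for the camera image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Attention branch: collapses features into a single-channel attention map
        # used both for explanation and for re-weighting the features.
        self.attention_branch = nn.Sequential(
            nn.Conv2d(128, 64, 1), nn.ReLU(),
            nn.Conv2d(64, 1, 1), nn.Sigmoid(),
        )
        # Perception branch: regresses steering and throttle from the attended,
        # pooled features concatenated with the current vehicle speed.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.perception_branch = nn.Sequential(
            nn.Linear(128 + 1, 64), nn.ReLU(),
            nn.Linear(64, 2),  # outputs: [steering, throttle]
        )

    def forward(self, image, speed):
        feat = self.features(image)            # (B, 128, H', W')
        att = self.attention_branch(feat)      # (B, 1, H', W') attention map
        attended = feat * (1.0 + att)          # residual attention re-weighting
        pooled = self.pool(attended).flatten(1)            # (B, 128)
        x = torch.cat([pooled, speed.view(-1, 1)], dim=1)  # append vehicle speed
        control = self.perception_branch(x)
        return control[:, 0], control[:, 1], att  # steering, throttle, explanation


# Dummy forward pass: one camera frame plus the current speed.
model = ABNDrivingModel()
image = torch.randn(1, 3, 120, 160)
speed = torch.tensor([12.5])
steering, throttle, attention_map = model(image, speed)
print(steering.shape, throttle.shape, attention_map.shape)

In the full ABN formulation, the attention branch also has its own output head and loss so that the attention map is trained to highlight decision-relevant regions; that training detail is omitted here for brevity.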
Pages: 1577-1582
Page count: 6
Related papers (50 total):
  • [41] Maity, Biswadip; Yi, Saehanseul; Seo, Dongjoo; Cheng, Leming; Lim, Sung-Soo; Kim, Jong-Chan; Donyanavard, Bryan; Dutt, Nikil. Chauffeur: Benchmark Suite for Design and End-to-End Analysis of Self-Driving Vehicles on Embedded Systems. ACM Transactions on Embedded Computing Systems, 2021, 20(5).
  • [42] Zhang, Chenkai; Deguchi, Daisuke; Okafuji, Yuki; Murase, Hiroshi. More Persuasive Explanation Method for End-to-End Driving Models. IEEE Access, 2023, 11: 4270-4282.
  • [43] Yang, Yan; Zhang, Chen; Jiang, Peipei; Yue, Hui. Attention-based end-to-end image defogging network. Electronics Letters, 2020, 56(15): 759+.
  • [44] Zhou, Bin; Dang, Xin. Gated End-to-End Memory Network Based on Attention Mechanism. 2018 International Conference on Orange Technologies (ICOT), 2018.
  • [45] Zhao, Ruijie; Zhang, Yanxin; Huang, Zhiqing; Yin, Chenkun. End-to-end Spatiotemporal Attention Model for Autonomous Driving. Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC 2020), 2020: 2649-2653.
  • [46] Zhu, Hui; Zou, Chao; Wang, Zhenyu; Xu, Kai; Huang, Zihao. Attention Based End-to-End Network for Short Video Classification. 2022 18th International Conference on Mobility, Sensing and Networking (MSN), 2022: 490-494.
  • [47] Ganapathi, Iyyakutti Iyappan; Javed, Sajid; Ali, Syed Sadaf; Mahmood, Arif; Vu, Ngoc-Son; Werghi, Naoufel. Learning to localize image forgery using end-to-end attention network. Neurocomputing, 2022, 512: 25-39.
  • [48] Zou, Nannan; Zhang, Honglei; Cricri, Francesco; Tavakoli, Hamed R.; Lainema, Jani; Aksu, Emre; Hannuksela, Miska; Rahtu, Esa. End-to-End Learning for Video Frame Compression with Self-Attention. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2020), 2020: 580-584.
  • [49] Abdou, Mohammed; Kamal, Hanan Ahmed. SDC-Net: End-to-End Multitask Self-Driving Car Camera Cocoon IoT-Based System. Sensors, 2022, 22(23).
  • [50] Xiao, Yi; Codevilla, Felipe; Porres, Diego; Lopez, Antonio M. Scaling Vision-based End-to-End Autonomous Driving with Multi-View Attention Learning. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023: 1586-1593.