Guiding Attention in End-to-End Driving Models

Cited by: 0
Authors
Porres, Diego [1 ]
Xiao, Yi [1 ]
Villalonga, Gabriel [1 ]
Levy, Alexandre [1 ]
Lopez, Antonio M. [1 ,2 ]
Affiliations
[1] Univ Autonoma Barcelona UAB, Comp Vis Ctr CVC, Barcelona, Spain
[2] Univ Autonoma Barcelona UAB, Dept Ciencies Computac, Barcelona, Spain
DOI
10.1109/IV55156.2024.10588598
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving. However, training these well-performing models usually requires a huge amount of data, and they still lack explicit and intuitive activation maps that reveal their inner workings while driving. In this paper, we study how to guide the attention of these models to improve their driving quality and obtain more intuitive activation maps by adding a loss term during training based on salient semantic maps. In contrast to previous work, our method does not require these salient semantic maps to be available at testing time, nor does it require modifying the architecture of the model to which it is applied. We perform tests using both perfect and noisy salient semantic maps, the latter inspired by errors likely to be encountered with real data, and obtain encouraging results in both cases. Using CIL++ as a representative state-of-the-art model and the CARLA simulator with its standard benchmarks, we conduct experiments showing the effectiveness of our method in training better autonomous driving models, especially when data and computational resources are scarce.
Pages: 2353-2360 (8 pages)