Hierarchical Interpretable Imitation Learning for End-to-End Autonomous Driving

Cited by: 67
Authors
Teng, Siyu [1 ,2 ]
Chen, Long [3 ,4 ]
Ai, Yunfeng [5 ]
Zhou, Yuanye [6 ]
Xuanyuan, Zhe [1 ]
Hu, Xuemin [7 ]
Affiliations
[1] HKBU United Int Coll, BNU, Zhuhai 999077, Peoples R China
[2] Hong Kong Baptist Univ, Kowloon, Hong Kong 999077, Peoples R China
[3] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
[4] Waytous Inc Qingdao, Qingdao 266109, Peoples R China
[5] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[6] Malardalen Univ, S-72214 Vasteras, Sweden
[7] Hubei Univ, Sch Comp Sci & Informat Engn, Wuhan 430062, Peoples R China
Source
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES
Funding
National Natural Science Foundation of China
Keywords
Semantics; Data models; Autonomous vehicles; Cameras; Reinforcement learning; Predictive models; Robustness; Autonomous driving; imitation learning; motion planning; end-to-end driving; interpretability
DOI
10.1109/TIV.2022.3225340
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Code
081104; 0812; 0835; 1405
Abstract
End-to-end autonomous driving provides a simple and efficient framework in which control commands are obtained directly from raw perception data. However, it struggles with stability and interpretability in complex urban scenarios. In this paper, we construct a two-stage end-to-end autonomous driving model for complex urban scenarios, named HIIL (Hierarchical Interpretable Imitation Learning), which integrates an interpretable BEV mask and a steering angle to address these problems. In Stage One, we propose a pretrained Bird's Eye View (BEV) model that leverages a BEV mask to provide an interpretation of the surrounding environment. In Stage Two, we construct an Interpretable Imitation Learning (IIL) model that fuses the BEV latent feature from Stage One with an additional steering angle computed by the Pure Pursuit (PP) algorithm. In the HIIL model, visual information is converted to semantic images by a semantic segmentation network; the semantic images are encoded to extract the BEV latent feature, which is decoded to predict BEV masks and fed to the IIL model as perception data. In this way, the BEV latent feature bridges the BEV and IIL models. The visual information is further supplemented by the PP steering angle, the speed vector, and location information, which improves performance in complex and adverse scenarios. Our HIIL model addresses the pressing need for interpretability and robustness in autonomous driving. We validate the proposed model in the CARLA simulator with extensive experiments, which demonstrate remarkable interpretability, generalization, and robustness in unknown scenarios for navigation tasks.
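To make the two-stage pipeline described in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' released code: all module names, feature dimensions, layer choices, and the fusion layout (BEV latent feature concatenated with the PP steering angle, speed vector, and location) are illustrative assumptions based only on the abstract.

```python
# Minimal sketch of the HIIL two-stage pipeline described in the abstract.
# Module names, dimensions, and layers are illustrative assumptions,
# not the authors' implementation.
import math
import torch
import torch.nn as nn


def pure_pursuit_steering(lookahead_xy, wheelbase=2.5):
    """Classic Pure Pursuit: steering angle toward a lookahead point
    (x forward, y lateral) given in the vehicle frame."""
    x, y = lookahead_xy
    ld = math.hypot(x, y)        # distance to the lookahead point
    alpha = math.atan2(y, x)     # heading error toward the point
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)


class BEVStage(nn.Module):
    """Stage One (assumed layout): encode semantic images into a latent
    feature and decode it into an interpretable BEV mask."""
    def __init__(self, in_channels=6, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Decoder reconstructs the BEV mask, which is what makes the latent interpretable.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, semantic_img):
        latent = self.encoder(semantic_img)
        bev_mask = self.decoder(latent)
        return latent, bev_mask


class IILStage(nn.Module):
    """Stage Two (assumed layout): fuse the BEV latent feature with the PP
    steering angle, speed vector, and location, then predict control commands."""
    def __init__(self, latent_dim=256, aux_dim=5):  # aux: steer + 2D speed + 2D location
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(latent_dim + aux_dim, 128), nn.ReLU(),
            nn.Linear(128, 3),  # e.g. steer, throttle, brake
        )

    def forward(self, latent, aux):
        return self.head(torch.cat([latent, aux], dim=-1))


if __name__ == "__main__":
    bev, iil = BEVStage(), IILStage()
    img = torch.randn(1, 6, 128, 128)                  # stacked semantic images
    latent, mask = bev(img)                            # Stage One: latent + BEV mask
    steer = pure_pursuit_steering((8.0, 1.2))          # geometric prior from a route point
    aux = torch.tensor([[steer, 5.0, 0.0, 10.0, 2.0]]) # [steer, vx, vy, x, y]
    control = iil(latent, aux)                         # Stage Two: control commands
    print(mask.shape, control.shape)
```

In this sketch the BEV latent feature is the only tensor passed from Stage One to Stage Two, mirroring how the abstract describes it as the bridge between the BEV and IIL models, while the decoded mask serves purely as the interpretable by-product.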
Pages: 673-683 (11 pages)