Real-to-Virtual Domain Unification for End-to-End Autonomous Driving

Cited by: 30
Authors
Yang, Luona [1 ]
Liang, Xiaodan [1 ,2 ]
Wang, Tairui [2 ]
Xing, Eric [1 ,2 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Petuum Inc, Pittsburgh, PA 15222 USA
Keywords
Domain unification; End-to-end autonomous driving
DOI
10.1007/978-3-030-01225-0_33
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In the spectrum of vision-based autonomous driving, vanilla end-to-end models are not interpretable and suboptimal in performance, while mediated perception models require additional intermediate representations such as segmentation masks or detection bounding boxes, whose annotation can be prohibitively expensive as we move to a larger scale. More critically, all prior works fail to deal with the notorious domain shift if we were to merge data collected from different sources, which greatly hinders the model generalization ability. In this work, we address the above limitations by taking advantage of virtual data collected from driving simulators, and present DU-drive, an unsupervised real-to-virtual domain unification framework for end-to-end autonomous driving. It first transforms real driving data to its less complex counterpart in the virtual domain, and then predicts vehicle control commands from the generated virtual image. Our framework has three unique advantages: (1) it maps driving data collected from a variety of source distributions into a unified domain, effectively eliminating domain shift; (2) the learned virtual representation is simpler than the input real image and closer in form to the "minimum sufficient statistic" for the prediction task, which relieves the burden of the compression phase while optimizing the information bottleneck tradeoff and leads to superior prediction performance; (3) it takes advantage of annotated virtual data which is unlimited and free to obtain. Extensive experiments on two public driving datasets and two driving simulators demonstrate the performance superiority and interpretive capability of DU-drive.
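The abstract describes a two-stage pipeline: an unsupervised generator first unifies real camera frames into the virtual domain, and a control predictor then regresses vehicle commands from the generated virtual image. Below is a minimal PyTorch-style sketch of that pipeline; the module names, layer configuration, and input resolution are illustrative assumptions and do not reproduce the authors' exact architecture or adversarial training procedure.

```python
# Minimal sketch of the DU-drive-style two-stage pipeline described in the
# abstract. All layer sizes and module names are illustrative assumptions.
import torch
import torch.nn as nn

class RealToVirtualGenerator(nn.Module):
    """Maps a real RGB frame to a same-sized virtual-domain counterpart."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, real_img):
        return self.net(real_img)

class ControlPredictor(nn.Module):
    """Regresses a scalar control command (e.g. steering) from a virtual image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(48, 1)

    def forward(self, virtual_img):
        x = self.features(virtual_img).flatten(1)
        return self.head(x)

# Real frames from any source dataset are first unified into the virtual
# domain, then the control command is predicted from that representation.
generator = RealToVirtualGenerator()
predictor = ControlPredictor()
real_batch = torch.randn(4, 3, 160, 320)   # dummy real camera frames
virtual_batch = generator(real_batch)      # real-to-virtual unification
steering = predictor(virtual_batch)        # control prediction
print(steering.shape)                      # torch.Size([4, 1])
```

Because every source dataset is mapped into one shared virtual domain before prediction, a single predictor can be trained on data merged from multiple real-world sources without suffering from domain shift.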
Pages: 553-570
Number of pages: 18
Related Papers
50 records in total
  • [21] Autonomous Driving Control Using End-to-End Deep Learning
    Lee, Myoung-jae
    Ha, Young-guk
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP 2020), 2020, : 470 - 473
  • [22] An End-to-End Curriculum Learning Approach for Autonomous Driving Scenarios
    Anzalone, Luca
    Barra, Paola
    Barra, Silvio
    Castiglione, Aniello
    Nappi, Michele
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (10) : 19817 - 19826
  • [23] An End-to-End Solution to Autonomous Driving Based on Xilinx FPGA
    Wu, Tianze
    Liu, Weiyi
    Jin, Yongwei
    2019 INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE TECHNOLOGY (ICFPT 2019), 2019, : 427 - 430
  • [24] Explaining Autonomous Driving by Learning End-to-End Visual Attention
    Cultrera, Luca
    Seidenari, Lorenzo
    Becattini, Federico
    Pala, Pietro
    Del Bimbo, Alberto
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 1389 - 1398
  • [25] End-to-end deep learning of lane detection and path prediction for real-time autonomous driving
    Lee, Der-Hau
    Liu, Jinn-Liang
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (01) : 199 - 205
  • [26] Segmented Encoding for Sim2Real of RL-based End-to-End Autonomous Driving
    Chung, Seung-Hwan
    Kong, Seung-Hyun
    Cho, Sangjae
    Nahrendra, I. Made Aswin
    2022 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2022, : 1290 - 1296
  • [27] End-to-end deep learning of lane detection and path prediction for real-time autonomous driving
    Der-Hau Lee
    Jinn-Liang Liu
    Signal, Image and Video Processing, 2023, 17 : 199 - 205
  • [28] AutoE2E: End-to-End Real-time Middleware for Autonomous Driving Control
    Bai, Yunhao
    Wang, Zejiang
    Wang, Xiaorui
    Wang, Junmin
    2020 IEEE 40TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS), 2020, : 1101 - 1111
  • [29] Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving
    Jia, Xiaosong
    Wu, Penghao
    Chen, Li
    Xie, Jiangwei
    He, Conghui
    Yan, Junchi
    Li, Hongyang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 21983 - 21994
  • [30] End-to-end autonomous driving based on the convolution neural network model
    Zhao, Yuanfang
    Chen, Yunli
    2019 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2019, : 419 - 423