OmniDet: Surround View Cameras Based Multi-Task Visual Perception Network for Autonomous Driving

Cited by: 51
Authors
Kumar, Varun Ravi [1 ,2 ]
Yogamani, Senthil [3 ]
Rashed, Hazem [4 ]
Sitsu, Ganesh [3 ]
Witt, Christian [1 ]
Leang, Isabelle [5 ]
Milz, Stefan [2 ]
Maeder, Patrick [2 ]
Affiliations
[1] Valeo, D-96317 Kronach, Germany
[2] TU Ilmenau, D-98693 Ilmenau, Germany
[3] Valeo, Galway, Ireland
[4] Valeo, Giza, Egypt
[5] Valeo, Chatellerault, France
Keywords
Autonomous systems; autonomous vehicles; computer vision; image reconstruction and distance learning
DOI
10.1109/LRA.2021.3062324
CLC Classification
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Surround-view fisheye cameras are commonly deployed in automated driving for 360-degree near-field sensing around the vehicle. This work presents a multi-task visual perception network operating on unrectified fisheye images to enable the vehicle to sense its surrounding environment. It consists of six primary tasks necessary for an autonomous driving system: depth estimation, visual odometry, semantic segmentation, motion segmentation, object detection, and lens soiling detection. We demonstrate that the jointly trained model performs better than the respective single-task versions. Our multi-task model has a shared encoder, providing a significant computational advantage, and synergized decoders where tasks support each other. We propose a novel camera-geometry-based adaptation mechanism to encode the fisheye distortion model at both training and inference time. This was crucial to enable training on the WoodScape dataset, comprising data from different parts of the world collected by 12 different cameras mounted on three different cars with different intrinsics and viewpoints. Given that bounding boxes are not a good representation for distorted fisheye images, we also extend object detection to use a polygon with non-uniformly sampled vertices. We additionally evaluate our model on standard automotive datasets, namely KITTI and Cityscapes. We obtain state-of-the-art results on KITTI for the depth estimation and pose estimation tasks and competitive performance on the other tasks. We perform extensive ablation studies on various architecture choices and task weighting methodologies. A short video at https://youtu.be/xbSjZ5OfPes provides qualitative results.
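The shared-encoder, multi-decoder design described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, toy heads, and task weights below are hypothetical stand-ins, shown only to make the structure concrete, namely that the encoder runs once per image and its features are reused by every task-specific decoder, with per-task losses combined via task weights.

```python
from typing import Callable, Dict, List

# Hypothetical sketch of a shared-encoder multi-task model.
# `encode`, the toy decoder heads, and TASK_WEIGHTS are illustrative
# placeholders, not names from the OmniDet paper.

def encode(image: List[float]) -> List[float]:
    """Stand-in for the shared encoder: one toy feature transform."""
    return [x * 0.5 for x in image]

# Task-specific decoder heads, each consuming the same shared features.
DECODERS: Dict[str, Callable[[List[float]], float]] = {
    "depth": lambda f: sum(f),          # toy depth head
    "segmentation": lambda f: max(f),   # toy segmentation head
    "detection": lambda f: min(f),      # toy detection head
}

# Per-task weights for combining losses during joint training.
TASK_WEIGHTS = {"depth": 1.0, "segmentation": 0.5, "detection": 0.5}

def multitask_forward(image: List[float]) -> Dict[str, float]:
    """Encoder runs once; every decoder reuses its features."""
    features = encode(image)
    return {task: head(features) for task, head in DECODERS.items()}

def weighted_loss(losses: Dict[str, float]) -> float:
    """Weighted sum of per-task losses, as in joint multi-task training."""
    return sum(TASK_WEIGHTS[t] * l for t, l in losses.items())
```

The computational advantage mentioned in the abstract comes from `encode` being evaluated once per image regardless of how many decoders are attached; only the lightweight heads scale with the number of tasks.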
Pages: 2830-2837 (8 pages)
Related Papers (50 total)
  • [1] Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving
    Sobh, Ibrahim
    Hamed, Ahmed
    Kumar, Varun Ravi
    Yogamani, Senthil
    JOURNAL OF IMAGING SCIENCE AND TECHNOLOGY, 2021, 65 (06)
  • [2] A Multi-Task Network Based on Dual-Neck Structure for Autonomous Driving Perception
    Tan, Guopeng
    Wang, Chao
    Li, Zhihua
    Zhang, Yuanbiao
    Li, Ruikai
    SENSORS, 2024, 24 (05)
  • [3] Disentangling and Vectorization: A 3D Visual Perception Approach for Autonomous Driving Based on Surround-View Fisheye Cameras
    Wu, Zizhang
    Zhang, Wenkai
    Wang, Jizheng
    Wang, Man
    Gan, Yuanzhu
    Gou, Xinchao
    Fang, Muqing
    Song, Jing
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 5576 - 5582
  • [4] Multi-Task Environmental Perception Methods for Autonomous Driving
    Liu, Ri
    Yang, Shubin
    Tang, Wansha
    Yuan, Jie
    Chan, Qiqing
    Yang, Yunchuan
    SENSORS, 2024, 24 (17)
  • [5] Multi-task perception algorithm of autonomous driving based on temporal fusion
    Liu Z.-W.
    Fan S.-H.
    Qi M.-Y.
    Dong M.
    Wang P.
    Zhao X.-M.
    Jiaotong Yunshu Gongcheng Xuebao/Journal of Traffic and Transportation Engineering, 2021, 21 (04): : 223 - 234
  • [6] Real-Time Multi-task Network for Autonomous Driving
    Dat, Vu Thanh
    Bao, Ngo Viet Hoai
    Hung, Phan Duy
    ADVANCES IN COMPUTING AND DATA SCIENCES (ICACDS 2022), PT I, 2022, 1613 : 207 - 218
  • [7] LiDAR-Based Multi-Task Road Perception Network for Autonomous Vehicles
    Yan, Fuwu
    Wang, Kewei
    Zou, Bin
    Tang, Luqi
    Li, Wenbo
    Lv, Chen
    IEEE ACCESS, 2020, 8 : 86753 - 86764
  • [8] GDMNet: A Unified Multi-Task Network for Panoptic Driving Perception
    Liu, Yunxiang
    Ma, Haili
    Zhu, Jianlin
    Zhang, Qiangbo
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 80 (02): : 2963 - 2978
  • [9] Autonomous Driving Multi-Task Perception Algorithm Based on Receptive-Field Attention Convolution
    Liu, Yunxiang
    Ma, Haili
    Zhu, Jianlin
    Zhang, Qing
    Jin, Qi
    Computer Engineering and Applications, 2024, 60 (20) : 133 - 141
  • [10] Attention-Based Deep Driving Model for Autonomous Vehicles with Surround-View Cameras
    Zhao, Yang
    Li, Jie
    Huang, Rui
    Li, Boqi
    Luo, Ao
    Li, Yaochen
    Cheng, Hong
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 286 - 292