Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges

Cited by: 705
Authors
Feng, Di [1 ,2 ]
Haase-Schütz, Christian [3 ,4 ]
Rosenbaum, Lars [1 ]
Hertlein, Heinz [3 ]
Gläser, Claudius [1 ]
Timm, Fabian [1 ]
Wiesbeck, Werner [4 ]
Dietmayer, Klaus [2 ]
Affiliations
[1] Robert Bosch GmbH, Corp Res, Driver Assistance Syst & Automated Driving, D-71272 Renningen, Germany
[2] Ulm Univ, Inst Measurement Control & Microtechnol, D-89081 Ulm, Germany
[3] Robert Bosch GmbH, Chassis Syst Control, Engn Cognit Syst, Automated Driving, D-74232 Abstatt, Germany
[4] Karlsruhe Inst Technol, Inst Radio Frequency Engn & Elect, D-76131 Karlsruhe, Germany
Keywords
Multi-modality; object detection; semantic segmentation; deep learning; autonomous driving; neural networks; road; fusion; LiDAR; environments; set
DOI: 10.1109/TITS.2020.2972974
Chinese Library Classification: TU [Building Science]
Discipline Code: 0813
Abstract
Recent advances in perception for autonomous driving are driven by deep learning. To achieve robust and accurate scene understanding, autonomous vehicles are typically equipped with different sensors (e.g., cameras, LiDARs, radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and the questions of "what to fuse", "when to fuse", and "how to fuse" remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/.
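The abstract's "when to fuse" question maps to a concrete architectural choice between early (raw-data), middle (feature-level), and late (decision-level) fusion. The following is a minimal PyTorch sketch contrasting the three stages for a dense per-pixel task; all module names, channel sizes, and the toy camera/LiDAR tensors are illustrative assumptions for this record, not the architecture of any specific method the survey covers.

```python
# Hypothetical sketch of early, middle, and late camera-LiDAR fusion.
# Assumes the LiDAR point cloud has already been projected to a dense
# image-plane depth map so both inputs share spatial dimensions.
import torch
import torch.nn as nn


class TinyBackbone(nn.Module):
    """Stand-in feature extractor for one modality (hypothetical)."""

    def __init__(self, in_ch, out_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class EarlyFusion(nn.Module):
    """Concatenate raw camera and LiDAR channels before any network."""

    def __init__(self, cam_ch=3, lidar_ch=1, num_classes=4):
        super().__init__()
        self.backbone = TinyBackbone(cam_ch + lidar_ch)
        self.head = nn.Conv2d(16, num_classes, 1)  # per-pixel class scores

    def forward(self, cam, lidar):
        return self.head(self.backbone(torch.cat([cam, lidar], dim=1)))


class MiddleFusion(nn.Module):
    """Extract features per modality, then fuse the feature maps."""

    def __init__(self, cam_ch=3, lidar_ch=1, num_classes=4):
        super().__init__()
        self.cam_net = TinyBackbone(cam_ch)
        self.lidar_net = TinyBackbone(lidar_ch)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, cam, lidar):
        fused = torch.cat([self.cam_net(cam), self.lidar_net(lidar)], dim=1)
        return self.head(fused)


class LateFusion(nn.Module):
    """Run independent per-modality heads and average their decisions."""

    def __init__(self, cam_ch=3, lidar_ch=1, num_classes=4):
        super().__init__()
        self.cam_branch = nn.Sequential(
            TinyBackbone(cam_ch), nn.Conv2d(16, num_classes, 1))
        self.lidar_branch = nn.Sequential(
            TinyBackbone(lidar_ch), nn.Conv2d(16, num_classes, 1))

    def forward(self, cam, lidar):
        return 0.5 * (self.cam_branch(cam) + self.lidar_branch(lidar))


if __name__ == "__main__":
    cam = torch.randn(1, 3, 64, 64)    # toy RGB image
    lidar = torch.randn(1, 1, 64, 64)  # toy projected LiDAR depth map
    for model in (EarlyFusion(), MiddleFusion(), LateFusion()):
        print(type(model).__name__, tuple(model(cam, lidar).shape))
```

All three variants produce the same per-pixel output shape, so they are drop-in alternatives for the same task; which stage works best in practice is exactly the open "when to fuse" question the paper surveys.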
Pages: 1341-1360 (20 pages)
Related Papers (50 in total)
  • [21] Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous Driving
    Chiu, Hsu-kuang
Li, Jie
    Ambrus, Rares
    Bohg, Jeannette
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 14227 - 14233
  • [22] A Multi-modal Moving Object Detection Method Based on GrowCut Segmentation
    Zhang, Xiuwei
    Zhang, Yanning
    Maybank, Stephen John
    Liang, Jun
    2014 IEEE SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE FOR MULTIMEDIA, SIGNAL AND VISION PROCESSING (CIMSIVP), 2014, : 213 - 218
  • [23] Multi-Modal Sensor Fusion-Based Semantic Segmentation for Snow Driving Scenarios
    Vachmanus, Sirawich
    Ravankar, Ankit A.
    Emaru, Takanori
    Kobayashi, Yukinori
    IEEE SENSORS JOURNAL, 2021, 21 (15) : 16839 - 16851
  • [24] Deep learning and multi-modal fusion for real-time multi-object tracking: Algorithms, challenges, datasets, and comparative study
    Wang, Xuan
    Sun, Zhaojie
    Chehri, Abdellah
    Jeon, Gwanggil
    Song, Yongchao
    INFORMATION FUSION, 2024, 105
  • [25] Collaborative Perception in Autonomous Driving: Methods, Datasets, and Challenges
    Han, Yushan
    Zhang, Hui
    Li, Huifang
    Jin, Yi
    Lang, Congyan
    Li, Yidong
    IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE, 2023, 15 (06) : 131 - 151
  • [26] Pseudo Multi-Modal Approach to LiDAR Semantic Segmentation
    Kim, Kyungmin
    SENSORS, 2024, 24 (23)
  • [27] A Quantitative Validation of Multi-Modal Image Fusion and Segmentation for Object Detection and Tracking
    LaHaye, Nicholas
    Garay, Michael J.
    Bue, Brian D.
    El-Askary, Hesham
    Linstead, Erik
    REMOTE SENSING, 2021, 13 (12)
  • [28] MULTI-MODAL SEMANTIC MESH SEGMENTATION IN URBAN SCENES
    Laupheimer, Dominik
    Haala, Norbert
    XXIV ISPRS CONGRESS IMAGING TODAY, FORESEEING TOMORROW, COMMISSION II, 2022, 5-2 : 267 - 274
  • [29] Deep Object Tracking with Multi-modal Data
    Zhang, Xuezhi
    Yuan, Yuan
    Lu, Xiaoqiang
    2016 INTERNATIONAL CONFERENCE ON COMPUTER, INFORMATION AND TELECOMMUNICATION SYSTEMS (CITS), 2016, : 161 - 165
  • [30] Multi-modal tumor segmentation methods based on deep learning: a narrative review
    Xue, Hengzhi
    Yao, Yudong
    Teng, Yueyang
    QUANTITATIVE IMAGING IN MEDICINE AND SURGERY, 2024, 14 (01) : 1122 - 1140