Multi-Modality Sensing and Data Fusion for Multi-Vehicle Detection

Cited by: 30
Authors
Roy, Debashri [1 ]
Li, Yuanyuan [1 ]
Jian, Tong [1 ]
Tian, Peng [1 ]
Chowdhury, Kaushik [1 ]
Ioannidis, Stratis [1 ]
Affiliation
[1] Northeastern University, Department of Electrical and Computer Engineering, Boston, MA 02115, USA
Funding
U.S. National Science Foundation;
Keywords
Vehicle detection; tracking; multimodal data; fusion; latent embeddings; image; seismic; acoustic; radar; CHALLENGES; TRACKING;
DOI
10.1109/TMM.2022.3145663
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
With the recent surge in autonomous driving vehicles, the need for accurate vehicle detection and tracking is more critical than ever. Detecting vehicles from visual sensors alone fails in non-line-of-sight (NLOS) settings; this can be compensated for by including other modalities in a multi-domain sensing environment. We propose several deep-learning-based frameworks for fusing different modalities (image, radar, acoustic, seismic) through the exploitation of complementary latent embeddings, incorporating multiple state-of-the-art fusion strategies. Our proposed fusion frameworks considerably outperform unimodal detection. Moreover, fusion between image and non-image modalities improves vehicle tracking and detection under NLOS conditions. We validate our models on the real-world multimodal ESCAPE dataset, showing a 33.16% improvement in vehicle detection by fusion (over visual inference alone) on test scenarios with 30-42% NLOS conditions. To demonstrate how well our framework generalizes, we also validate our models on the multimodal nuScenes dataset, showing a ~22% improvement over competing methods.
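The abstract describes fusing complementary latent embeddings extracted from image, radar, acoustic, and seismic inputs. The following is a minimal, hypothetical PyTorch sketch of one such fusion strategy (concatenation of per-modality embeddings followed by a joint classifier); the encoder layers, feature dimensions, and class names are illustrative assumptions, not the authors' exact architecture.

    # Hypothetical sketch of latent-embedding fusion for multimodal vehicle detection.
    # The encoders, dimensions, and concatenation-based fusion head are assumptions
    # for illustration, not the paper's exact design.
    import torch
    import torch.nn as nn

    class ModalityEncoder(nn.Module):
        """Maps one modality's features to a shared latent embedding space."""
        def __init__(self, in_dim: int, embed_dim: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 128), nn.ReLU(),
                nn.Linear(128, embed_dim), nn.ReLU(),
            )

        def forward(self, x):
            return self.net(x)

    class LateFusionDetector(nn.Module):
        """Concatenates per-modality latent embeddings and predicts vehicle presence."""
        def __init__(self, modality_dims: dict, embed_dim: int = 64, num_classes: int = 2):
            super().__init__()
            self.encoders = nn.ModuleDict(
                {name: ModalityEncoder(dim, embed_dim) for name, dim in modality_dims.items()}
            )
            self.classifier = nn.Sequential(
                nn.Linear(embed_dim * len(modality_dims), 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, inputs: dict):
            # Fuse by concatenating latent embeddings; other strategies
            # (e.g., attention or learned modality weighting) would slot in here.
            z = torch.cat([self.encoders[m](inputs[m]) for m in self.encoders], dim=-1)
            return self.classifier(z)

    if __name__ == "__main__":
        # Toy per-modality feature dimensions for image, radar, acoustic, and seismic inputs.
        dims = {"image": 512, "radar": 64, "acoustic": 128, "seismic": 128}
        model = LateFusionDetector(dims)
        batch = {m: torch.randn(4, d) for m, d in dims.items()}
        print(model(batch).shape)  # torch.Size([4, 2])

Because the non-image encoders are separate modules, an image-only failure under NLOS conditions can in principle be compensated by the remaining modalities at fusion time, which is the intuition the abstract reports empirically.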
Pages: 2280-2295
Page count: 16
Related Papers
50 records in total
  • [1] MMYFnet: Multi-Modality YOLO Fusion Network for Object Detection in Remote Sensing Images
    Guo, Huinan
    Sun, Congying
    Zhang, Jing
    Zhang, Wuxia
    Zhang, Nengshuang
    REMOTE SENSING, 2024, 16 (23)
  • [2] Navigating an Automated Driving Vehicle via the Early Fusion of Multi-Modality
    Haris, Malik
    Glowacz, Adam
    SENSORS, 2022, 22 (04)
  • [3] Equivariant Multi-Modality Image Fusion
    Zhao, Zixiang
Bai, Haowen
    Zhang, Jiangshe
    Zhang, Yulun
Zhang, Kai
    Xu, Shuang
    Chen, Dongdong
    Timofte, Radu
    Van Gool, Luc
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024: 25912-25921
  • [4] Multi-Modality Tensor Fusion Based Human Fatigue Detection
    Ha, Jongwoo
    Ryu, Joonhyuck
    Ko, Joonghoon
    ELECTRONICS, 2023, 12 (15)
  • [5] Multi-Vehicle Decentralized Fusion and Tracking
    El-Fallah, A.
    Zatezalo, A.
    Mahler, R.
    Mehra, R. K.
    SIGNAL PROCESSING, SENSOR FUSION, AND TARGET RECOGNITION XXI, 2012, 8392
  • [6] Deep learning supported disease detection with multi-modality image fusion
    Vinnarasi, F. Sangeetha Francelin
    Daniel, Jesline
    Rose, J. T. Anita
    Pugalenthi, R.
JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY, 2021, 29 (03): 411-434
  • [7] CROSS-MEDIA TOPIC DETECTION: A MULTI-MODALITY FUSION FRAMEWORK
    Zhang, Yanyan
    Li, Guorong
    Chu, Lingyang
    Wang, Shuhui
    Zhang, Weigang
    Huang, Qingming
    2013 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME 2013), 2013,
  • [8] Multi-modality Fusion Network for Action Recognition
    Huang, Kai
    Qin, Zheng
    Xu, Kaiping
    Ye, Shuxiong
    Wang, Guolong
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2017, PT II, 2018, 10736 : 139 - 149
  • [9] Multi-Modality Image Fusion and Object Detection Based on Semantic Information
    Liu, Yong
    Zhou, Xin
    Zhong, Wei
    ENTROPY, 2023, 25 (05)
  • [10] A Survey of Data Representation for Multi-Modality Event Detection and Evolution
    Xiao, Kejing
    Qian, Zhaopeng
    Qin, Biao
APPLIED SCIENCES-BASEL, 2022, 12 (04)