PI-RCNN: An Efficient Multi-Sensor 3D Object Detector with Point-Based Attentive Cont-Conv Fusion Module

Cited by: 0
Authors
Xie, Liang [1 ,2 ]
Xiang, Chao [1 ]
Yu, Zhengxu [1 ]
Xu, Guodong [1 ,2 ]
Yang, Zheng [2 ]
Cai, Deng [1 ,3 ]
He, Xiaofei [1 ,2 ]
Affiliations
[1] Zhejiang Univ, State Key Lab CAD&CG, Hangzhou, Peoples R China
[2] Fabu Inc, Hangzhou, Peoples R China
[3] Alibaba Zhejiang Univ Joint Inst Frontier Technol, Hangzhou, Peoples R China
Keywords
(none listed)
DOI
(none available)
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
LiDAR point clouds and RGB images are both essential for 3D object detection, so many state-of-the-art 3D detection algorithms are dedicated to fusing these two types of data effectively. However, fusion methods based on the Bird's Eye View (BEV) or voxel format are not accurate. In this paper, we propose a novel fusion approach named the Point-based Attentive Cont-conv Fusion (PACF) module, which fuses multi-sensor features directly on 3D points. In addition to continuous convolution, we add Point-Pooling and Attentive Aggregation operations to make the fused features more expressive. Building on the PACF module, we further propose a 3D multi-sensor multi-task network called Pointcloud-Image RCNN (PI-RCNN for short), which handles both the image segmentation and 3D object detection tasks. PI-RCNN employs a segmentation sub-network to extract full-resolution semantic feature maps from images and then fuses the multi-sensor features via the PACF module. Benefiting from the effectiveness of the PACF module and the expressive semantic features from the segmentation module, PI-RCNN achieves significant improvements in 3D object detection. We demonstrate the effectiveness of the PACF module and PI-RCNN on the KITTI 3D Detection benchmark, where our method achieves state-of-the-art results on the 3D AP metric.
Pages: 12460-12467
Page count: 8
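The abstract describes the PACF module as combining a continuous convolution over each 3D point's neighborhood of image features with a Point-Pooling step and an Attentive Aggregation. As a rough illustration of that pipeline, here is a minimal PyTorch sketch; every name, shape, and design choice below (k-nearest-neighbor grouping, max-pooling, two-way softmax attention, the `PACFusionSketch` class itself) is an assumption reconstructed from the abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn


class PACFusionSketch(nn.Module):
    """Hypothetical PACF-style fusion: a continuous convolution (an MLP
    over relative offsets and the image features of each point's k
    nearest 3D neighbors), a Point-Pooling step (max over neighbors),
    and an Attentive Aggregation that learns to weight the fused image
    branch against the raw point branch."""

    def __init__(self, c_img: int, c_pts: int, c_out: int, k: int = 3):
        super().__init__()
        self.k = k
        # Continuous-convolution branch: MLP over (offset, image feature).
        self.cont_conv = nn.Sequential(nn.Linear(3 + c_img, c_out), nn.ReLU())
        # Aligns raw point-feature channels with the fused width.
        self.proj_pts = nn.Linear(c_pts, c_out)
        # Produces two softmax weights, one per branch.
        self.attn = nn.Linear(2 * c_out, 2)

    def forward(self, xyz, pts_feat, img_feat_at_pts):
        # xyz:             (N, 3)     3D point coordinates
        # pts_feat:        (N, c_pts) LiDAR point features
        # img_feat_at_pts: (N, c_img) semantic image features sampled at
        #                             each point's camera projection
        dist = torch.cdist(xyz, xyz)                    # (N, N) pairwise distances
        _, idx = dist.topk(self.k, largest=False)       # (N, k) nearest neighbors
        offsets = xyz[idx] - xyz.unsqueeze(1)           # (N, k, 3) relative offsets
        nbr_img = img_feat_at_pts[idx]                  # (N, k, c_img)
        h = self.cont_conv(torch.cat([offsets, nbr_img], dim=-1))  # (N, k, c_out)
        pooled = h.max(dim=1).values                    # Point-Pooling over neighbors
        p = self.proj_pts(pts_feat)                     # (N, c_out) point branch
        w = torch.softmax(self.attn(torch.cat([pooled, p], dim=-1)), dim=-1)
        return w[:, :1] * pooled + w[:, 1:] * p         # attentive weighted sum


# Toy usage: 64 points, 32-channel image features, 16-channel point features.
fuse = PACFusionSketch(c_img=32, c_pts=16, c_out=64)
out = fuse(torch.randn(64, 3), torch.randn(64, 16), torch.randn(64, 32))
print(out.shape)  # torch.Size([64, 64])
```

In the full PI-RCNN pipeline, `img_feat_at_pts` would come from projecting each LiDAR point into the segmentation sub-network's full-resolution semantic feature map; it is passed in directly here to keep the sketch self-contained.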