SparseDet: A Simple and Effective Framework for Fully Sparse LiDAR-Based 3-D Object Detection

Cited: 1
Authors
Liu, Lin [1 ]
Song, Ziying [1 ]
Xia, Qiming [2 ]
Jia, Feiyang [1 ]
Jia, Caiyan [1 ]
Yang, Lei [3 ,4 ]
Gong, Yan [5 ]
Pan, Hongyu [6 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp Sci & Technol, Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[2] Xiamen Univ, Fujian Key Lab Sensing & Comp Smart Cities, Xiamen 361005, Fujian, Peoples R China
[3] Tsinghua Univ, State Key Lab Intelligent Green Vehicle & Mobil, Beijing 100084, Peoples R China
[4] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[5] JD Logist, Autonomous Driving Dept X Div, Beijing 101111, Peoples R China
[6] Horizon Robot, Beijing 100190, Peoples R China
Keywords
Feature extraction; Three-dimensional displays; Point cloud compression; Detectors; Aggregates; Object detection; Computational efficiency; 3-D object detection; feature aggregation; sparse detectors
DOI
10.1109/TGRS.2024.3468394
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry]
Subject classification codes
0708; 070902
Abstract
LiDAR-based sparse 3-D object detection plays a crucial role in autonomous driving applications due to its computational efficiency. Existing methods either use the features of a single central voxel as an object proxy or treat an aggregated cluster of foreground points as an object proxy. However, the former cannot aggregate contextual information, resulting in insufficient information expression in object proxies, while the latter relies on multistage pipelines and auxiliary tasks, which reduce inference speed. To maintain the efficiency of the sparse framework while fully aggregating contextual information, we propose SparseDet, which uses sparse queries as object proxies. It introduces two key modules, the local multiscale feature aggregation (LMFA) module and the global feature aggregation (GFA) module, which together capture contextual information and thereby enhance the ability of the proxies to represent objects. The LMFA module fuses features across scales for sparse key voxels via coordinate transformations and nearest neighbor relationships, capturing object-level details and local contextual information, whereas the GFA module uses self-attention to selectively aggregate key voxel features across the entire scene, capturing scene-level contextual information. Experiments on nuScenes and KITTI demonstrate the effectiveness of our method. Specifically, SparseDet surpasses the previous best sparse detector VoxelNeXt (a typical method using voxels as object proxies) by 2.2% mean average precision (mAP) at 13.5 frames/s on nuScenes, and outperforms VoxelNeXt by 1.12% 3-D AP on hard-level tasks at 17.9 frames/s on KITTI. Moreover, SparseDet not only exceeds the mAP of FSDV2 (a classical method using clusters of foreground points as object proxies) but also runs 1.3 times faster than FSDV2 on the nuScenes test set.
The code has been released in https://github.com/liulin813/SparseDet.git.
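To make the two aggregation ideas in the abstract concrete, here is a minimal NumPy sketch of what LMFA-style local fusion and GFA-style global self-attention could look like. This is not the released implementation: the function names, the stride-2 coarse grid, the k = 3 neighborhood, and the simple additive fusion are all illustrative assumptions.

```python
import numpy as np

def lmfa_sketch(coords, feats, coarse_coords, coarse_feats, k=3):
    """Local multiscale aggregation (hypothetical sketch): map each key
    voxel's coordinates onto a stride-2 coarser grid, find its k nearest
    coarse voxels, and fuse their mean feature into the fine feature."""
    out = np.empty_like(feats)
    mapped = coords // 2  # coordinate transformation to the coarse grid
    for i, c in enumerate(mapped):
        dist = np.linalg.norm(coarse_coords - c, axis=1)
        nn = np.argsort(dist)[:k]  # nearest-neighbor relationship
        out[i] = feats[i] + coarse_feats[nn].mean(axis=0)
    return out

def gfa_sketch(feats):
    """Global aggregation (hypothetical sketch): plain softmax
    self-attention over all key voxels in the scene (Q = K = V)."""
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # rows sum to 1
    return attn @ feats
```

Because both operators run only over the sparse set of key voxels rather than a dense feature map, the cost scales with the number of occupied voxels, which is the efficiency argument the abstract makes for fully sparse detection.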
Pages: 14