SparseDet: A Simple and Effective Framework for Fully Sparse LiDAR-Based 3-D Object Detection

Cited by: 1
Authors
Liu, Lin [1 ]
Song, Ziying [1 ]
Xia, Qiming [2 ]
Jia, Feiyang [1 ]
Jia, Caiyan [1 ]
Yang, Lei [3 ,4 ]
Gong, Yan [5 ]
Pan, Hongyu [6 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp Sci & Technol, Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[2] Xiamen Univ, Fujian Key Lab Sensing & Comp Smart Cities, Xiamen 361005, Fujian, Peoples R China
[3] Tsinghua Univ, State Key Lab Intelligent Green Vehicle & Mobil, Beijing 100084, Peoples R China
[4] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[5] JD Logist, Autonomous Driving Dept X Div, Beijing 101111, Peoples R China
[6] Horizon Robot, Beijing 100190, Peoples R China
Keywords
Feature extraction; Three-dimensional displays; Point cloud compression; Detectors; Aggregates; Object detection; Computational efficiency; 3-D object detection; feature aggregation; sparse detectors
DOI
10.1109/TGRS.2024.3468394
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry]
Subject Classification Codes
0708; 070902
Abstract
LiDAR-based sparse 3-D object detection plays a crucial role in autonomous driving applications because of its computational efficiency. Existing methods either use the features of a single central voxel as an object proxy or treat an aggregated cluster of foreground points as an object proxy. However, the former cannot aggregate contextual information, leaving object proxies with insufficient expressive power, while the latter relies on multistage pipelines and auxiliary tasks, which reduce inference speed. To preserve the efficiency of the sparse framework while fully aggregating contextual information, we propose SparseDet, which designs sparse queries as object proxies. It introduces two key modules, the local multiscale feature aggregation (LMFA) module and the global feature aggregation (GFA) module, which together capture contextual information and thereby enhance the ability of the proxies to represent objects. The LMFA module fuses features across scales for sparse key voxels via coordinate transformations and nearest-neighbor relationships, capturing object-level details and local contextual information, whereas the GFA module uses self-attention to selectively aggregate key-voxel features across the entire scene, capturing scene-level contextual information. Experiments on nuScenes and KITTI demonstrate the effectiveness of our method. Specifically, SparseDet surpasses the previous best sparse detector, VoxelNeXt (a typical method using voxels as object proxies), by 2.2% mean average precision (mAP) at 13.5 frames/s on nuScenes, and outperforms VoxelNeXt by 1.12% AP(3-D) on the hard level at 17.9 frames/s on KITTI. Moreover, SparseDet not only exceeds the mAP of FSDV2 (a classical method using clusters of foreground points as object proxies) but also runs 1.3 times faster than FSDV2 on the nuScenes test set.
The code is available at https://github.com/liulin813/SparseDet.git.
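The two aggregation ideas summarized in the abstract can be illustrated with a minimal, dependency-free sketch. This is not the paper's implementation (which operates on learned sparse voxel features with projections and multiple scales); the function names, the single-scale k-NN averaging, and the use of plain scaled dot-product attention without learned weights are all simplifying assumptions for illustration only.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def global_attention(feats):
    """GFA-style step: each key voxel attends to every voxel in the scene
    via scaled dot-product self-attention (no learned projections here)."""
    d = len(feats[0])
    out = []
    for q in feats:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in feats]
        weights = softmax(scores)
        out.append([sum(w * k[j] for w, k in zip(weights, feats))
                    for j in range(d)])
    return out

def knn_aggregate(coords, feats, k=2):
    """LMFA-style step (single scale): average each voxel's feature with
    those of its k nearest neighbors in voxel-coordinate space."""
    out = []
    for i, ci in enumerate(coords):
        order = sorted(range(len(coords)),
                       key=lambda j: sum((a - b) ** 2
                                         for a, b in zip(ci, coords[j])))
        nbrs = order[:k + 1]  # the voxel itself plus its k nearest neighbors
        out.append([sum(feats[j][c] for j in nbrs) / len(nbrs)
                    for c in range(len(feats[i]))])
    return out
```

In the paper's pipeline the local step enriches each key voxel with object-level detail before the global step mixes in scene-level context; here the two functions can simply be composed, e.g. `global_attention(knn_aggregate(coords, feats))`.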
Pages: 14