FocalFormer3D: Focusing on Hard Instance for 3D Object Detection

Cited by: 23
Authors
Chen, Yilun [1 ]
Yu, Zhiding [3 ]
Chen, Yukang [1 ]
Lan, Shiyi [3 ]
Anandkumar, Anima [2 ,3 ]
Jia, Jiaya [1 ]
Alvarez, Jose M.
Affiliations
[1] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[2] CALTECH, Pasadena, CA USA
[3] NVIDIA, Santa Clara, CA USA
DOI
10.1109/ICCV51070.2023.00771
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
False negatives (FN) in 3D object detection, e.g., missing predictions of pedestrians, vehicles, or other obstacles, can lead to potentially dangerous situations in autonomous driving. Despite its severity, this issue is understudied in many current 3D detection methods. In this work, we propose Hard Instance Probing (HIP), a general pipeline that identifies FN in a multi-stage manner and guides the models to focus on excavating difficult instances. For 3D object detection, we instantiate this method as FocalFormer3D, a simple yet effective detector that excels at excavating difficult objects and improving prediction recall. FocalFormer3D features a multi-stage query generation to discover hard objects and a box-level transformer decoder to efficiently distinguish objects from massive object candidates. Experimental results on the nuScenes and Waymo datasets validate the superior performance of FocalFormer3D. The advantage leads to strong performance on both detection and tracking, in both LiDAR and multi-modal settings. Notably, FocalFormer3D achieves a 70.5 mAP and 73.9 NDS on the nuScenes detection benchmark and 72.1 AMOTA on the nuScenes tracking benchmark, both ranking 1st place on the nuScenes LiDAR leaderboard. Our code is available at https://github.com/NVlabs/FocalFormer3D.
Pages: 8360-8371
Page count: 12
Related Papers
50 items total
  • [41] 3D sketching for 3D object retrieval
    Li, Bo
    Yuan, Juefei
    Ye, Yuxiang
    Lu, Yijuan
    Zhang, Chaoyang
    Tian, Qi
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (06) : 9569 - 9595
  • [43] Multimodal Object Query Initialization for 3D Object Detection
    van Geerenstein, Mathijs R.
    Ruppel, Felicia
    Dietmayer, Klaus
    Gavrila, Dariu M.
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2024), 2024, : 12484 - 12491
  • [44] 3D Object Proposals for Accurate Object Class Detection
    Chen, Xiaozhi
    Kundu, Kaustav
    Zhu, Yukun
    Berneshawi, Andrew
    Ma, Huimin
    Fidler, Sanja
    Urtasun, Raquel
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 28 (NIPS 2015), 2015, 28
  • [45] Reinforcing LiDAR-Based 3D Object Detection with RGB and 3D Information
    Liu, Wenjian
    Zhou, Yue
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT II, 2019, 11954 : 199 - 209
  • [46] MonoSample: Synthetic 3D Data Augmentation Method in Monocular 3D Object Detection
    Qiao, Junchao
    Liu, Biao
    Yang, Jiaqi
    Wang, Baohua
    Xiu, Sanmu
    Du, Xin
    Nie, Xiaobo
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (08): : 7326 - 7332
  • [47] SGM3D: Stereo Guided Monocular 3D Object Detection
    Zhou, Zheyuan
    Du, Liang
    Ye, Xiaoqing
    Zou, Zhikang
    Tan, Xiao
    Zhang, Li
    Xue, Xiangyang
    Feng, Jianfeng
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04) : 10478 - 10485
  • [48] KPP3D:Key Point Painting for 3D Object Detection
    Wang, Mingming
    Chen, Qingkui
    Fu, Zhibing
    Computer Engineering and Applications, 2023, 59 (17) : 195 - 204
  • [49] Multiview 3D Object Detection Based on Improved DETR3D
    Zhang, Yuhan
    Huang, Miaohua
    Chen, Gengyao
    Li, Yanzhou
    Wu, Yiming
    LASER & OPTOELECTRONICS PROGRESS, 2025, 62 (02)
  • [50] RoadSense3D: A Framework for Roadside Monocular 3D Object Detection
    Carta, Salvatore
    Castrillon-Santana, Modesto
    Marras, Mirko
    Mohamed, Sondos
    Podda, Alessandro Sebastian
    Saia, Roberto
    Sau, Marco
    Zimmer, Walter
    ADJUNCT PROCEEDINGS OF THE 32ND ACM CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION, UMAP 2024, 2024, : 452 - 459