Sparse Activation Maps for Interpreting 3D Object Detection

Cited by: 6
Authors
Chen, Qiuxiao [1]
Li, Pengfei [2]
Xu, Meng [1]
Qi, Xiaojun [1]
Affiliations
[1] Utah State Univ, Logan, UT 84322 USA
[2] Univ Calif Riverside, Riverside, CA 92521 USA
DOI
10.1109/CVPRW53098.2021.00017
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We propose a technique to generate "visual explanations" for interpreting volumetric-based 3D object detection networks. Specifically, we use the average pooling of weights to produce a Sparse Activation Map (SAM), which highlights the important regions of the 3D point cloud data. SAMs are applicable to any volumetric-based model (i.e., they are model agnostic) and provide intuitive intermediate results at different layers to help understand complex network structures. The SAMs at the 3D and 2D feature map layers help to understand how effectively neurons capture object information. The SAMs at the classification layer for each object class help to understand the true positives and false positives of each network. Experimental results on the KITTI dataset demonstrate that the visual observations from the SAMs match the detection results of three volumetric-based models.
Pages: 76-84 (9 pages)
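
The abstract above describes the SAM as using average pooling of weights to highlight important regions of the point cloud. Below is a minimal sketch of one way such a map could be computed, in the spirit of class activation mapping: classification-layer weights re-weight the channels of a (densified) sparse 3D feature map, and the result is masked to occupied voxels so it stays sparse like the input. The function name, tensor shapes, and occupancy mask are illustrative assumptions made for this sketch, not the paper's exact implementation.

# Minimal sketch of a CAM-style sparse activation map for a volumetric
# 3D detector. Shapes and the occupancy mask are assumptions for
# illustration; the paper's exact formulation may differ.
import torch
import torch.nn.functional as F

def sparse_activation_map(features: torch.Tensor,
                          class_weights: torch.Tensor,
                          occupancy: torch.Tensor) -> torch.Tensor:
    """Per-class activation maps restricted to occupied voxels.

    features:      (C, D, H, W) dense view of a sparse 3D feature map.
    class_weights: (num_classes, C) classification-layer weights
                   (e.g., following global average pooling).
    occupancy:     (D, H, W) boolean mask of non-empty voxels.
    Returns:       (num_classes, D, H, W) normalized activation maps.
    """
    c, d, h, w = features.shape
    # Channel-wise weighted sum: (num_classes, C) @ (C, D*H*W).
    sam = class_weights @ features.reshape(c, -1)
    sam = sam.reshape(-1, d, h, w)
    # Keep only occupied voxels so the map stays sparse like the input.
    sam = sam * occupancy.unsqueeze(0)
    # ReLU and per-class max normalization for visualization.
    sam = F.relu(sam)
    max_vals = sam.flatten(1).max(dim=1).values.clamp(min=1e-6)
    return sam / max_vals.view(-1, 1, 1, 1)

The same weighting could be applied at the 2D (bird's-eye-view) feature-map layer with a (C, H, W) tensor and an (H, W) mask; restricting the map to occupied cells is what would distinguish it from a dense class activation map.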