Unsupervised Class-Agnostic Instance Segmentation of 3D LiDAR Data for Autonomous Vehicles

Cited: 6
Authors
Nunes, Lucas [1 ]
Chen, Xieyuanli [1 ]
Marcuzzi, Rodrigo [1 ]
Osep, Aljosa [2 ]
Leal-Taixe, Laura [2 ]
Stachniss, Cyrill [1 ]
Behley, Jens [1 ]
Affiliations
[1] Univ Bonn, D-53115 Bonn, Germany
[2] Tech Univ Munich, D-30772 Munich, Germany
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2022, Vol. 7, No. 4
Keywords
Semantic Scene Understanding; Deep Learning Methods;
DOI
10.1109/LRA.2022.3187872
CLC Number
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Fine-grained scene understanding is essential for autonomous driving. The context around a vehicle can change drastically while navigating, making it hard to identify and understand the different objects that may appear. Although recent efforts on semantic and panoptic segmentation have pushed the field of scene understanding forward, it remains a challenging task. Current methods depend on annotations provided before deployment and are bound to the labeled classes, ignoring long-tailed classes that are not annotated in the training data due to the scarcity of examples. However, such long-tailed classes, e.g., baby strollers or unknown animals, can be crucial when interpreting the vehicle's surroundings, for instance, for safe interaction. In this paper, we address the problem of class-agnostic instance segmentation, which also covers long-tailed classes. We propose a novel approach and a benchmark for class-agnostic instance segmentation, together with a thorough evaluation of our method on real-world data. Our method relies on a network trained in a self-supervised manner to extract point-wise features, from which we build a graph representation of the point cloud. We then apply GraphCut to separate foreground from background, achieving instance segmentation without requiring any labels. Our results show that our approach achieves instance segmentation with performance competitive with state-of-the-art supervised methods.
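The pipeline the abstract describes (point-wise features, a graph over the point cloud, then a GraphCut-style foreground/background split) can be illustrated with a toy min-cut example. This is a minimal sketch, not the authors' implementation: the 1-D "feature" per point, the chain-shaped neighbor graph, and the integer edge weights are all hypothetical, and a pure-Python Edmonds-Karp max-flow stands in for a real GraphCut solver.

```python
from collections import deque

def min_cut_source_side(capacity, source, sink):
    """Edmonds-Karp max-flow on an adjacency-matrix graph; returns the
    set of nodes on the source side of the minimum s-t cut."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]

    def bfs_parents():
        # BFS in the residual graph, recording each node's parent.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        return parent

    while True:
        parent = bfs_parents()
        if parent[sink] == -1:
            break  # no augmenting path left: flow is maximal
        # Find the bottleneck capacity along the augmenting path.
        bottleneck = float("inf")
        v = sink
        while v != source:
            bottleneck = min(bottleneck, capacity[parent[v]][v] - flow[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:
            flow[parent[v]][v] += bottleneck
            flow[v][parent[v]] -= bottleneck
            v = parent[v]
    # Nodes still reachable in the residual graph form the source side.
    return {v for v, p in enumerate(bfs_parents()) if p != -1}

# Toy scene: 6 points with a hypothetical 1-D appearance feature each;
# points 0-2 resemble an object, points 3-5 the background.
features = [0.90, 0.85, 0.80, 0.10, 0.15, 0.20]
n_pts = len(features)
SRC, SNK = n_pts, n_pts + 1          # terminal nodes for the s-t cut
cap = [[0] * (n_pts + 2) for _ in range(n_pts + 2)]
for i in range(n_pts - 1):           # chain-shaped neighbor graph
    w = max(1, int(10 * (1 - abs(features[i] - features[i + 1]))))
    cap[i][i + 1] = cap[i + 1][i] = w  # similar features -> strong edge
cap[SRC][0] = 100                    # seed: point 0 is foreground
cap[5][SNK] = 100                    # seed: point 5 is background

foreground = min_cut_source_side(cap, SRC, SNK) - {SRC}
print(sorted(foreground))            # -> [0, 1, 2]
```

The cut lands on the weak edge between points 2 and 3, where the feature similarity drops, so the seeds pull the two feature clusters apart without any class labels; the actual method operates analogously on learned point-wise features over a real neighbor graph.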
Pages: 8713-8720
Page count: 8
Related Papers
50 records in total
  • [21] Integrated Object Segmentation and Tracking for 3D LIDAR Data
    Tuncer, Mehmet Ali Cagri
    Schulz, Dirk
    ICINCO: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, VOL 2, 2016, : 344 - 351
  • [22] RGB and LiDAR fusion based 3D Semantic Segmentation for Autonomous Driving
    El Madawi, Khaled
    Rashed, Hazem
    El Sallab, Ahmad
    Nasr, Omar
    Kamel, Hanan
    Yogamani, Senthil
    2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019, : 7 - 12
  • [23] Few-shot 3D LiDAR Semantic Segmentation for Autonomous Driving
    Mei, Jilin
    Zhou, Junbao
    Hu, Yu
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023, : 9324 - 9330
  • [24] Class-Balanced PolarMix for Data Augmentation of 3D LIDAR Point Clouds Semantic Segmentation
    Liu, Bo
    Qi, Xiao
    JOURNAL OF INTERNET TECHNOLOGY, 2025, 26 (01): : 65 - 75
  • [25] Unsupervised segmentation of 3D and 2D seismic reflection data
    Köster, K
    Spann, M
    VISION INTERFACE - REAL WORLD APPLICATIONS OF COMPUTER VISION, 1999, 35 : 57 - 77
  • [26] Unsupervised segmentation of 3D and 2D seismic reflection data
    Köster, K
    Spann, M
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 1999, 13 (05) : 643 - 663
  • [27] DualGroup for 3D instance and panoptic segmentation
    Zhao, Lin
    Chen, Sijia
    Tang, Xu
    Tao, Wenbing
    PATTERN RECOGNITION LETTERS, 2024, 185 : 124 - 129
  • [28] Hierarchical Aggregation for 3D Instance Segmentation
    Chen, Shaoyu
    Fang, Jiemin
    Zhang, Qian
    Liu, Wenyu
    Wang, Xinggang
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 15447 - 15456
  • [29] Enhanced Obstacle Detection in Autonomous Vehicles Using 3D LiDAR Mapping Techniques
    Tokgoz, Muhammed Enes
    Yusefi, Abdullah
    Toy, Ibrahim
    Durdu, Akif
    2024 23RD INTERNATIONAL SYMPOSIUM INFOTEH-JAHORINA, INFOTEH, 2024,
  • [30] A data-centric unsupervised 3D mesh segmentation method
    Sivri, Talya Tumer
    Sahillioglu, Yusuf
    VISUAL COMPUTER, 2024, 40 (04): : 2237 - 2249