Adaptive Patchwork: Real-Time Ground Segmentation for 3D Point Cloud With Adaptive Partitioning and Spatial-Temporal Context

Times Cited: 0
Authors
Ren, Hao [1 ]
Wang, Mingwei [1 ]
Li, Wenpeng [2 ]
Liu, Chen [2 ]
Zhang, Mengli [1 ]
Affiliations
[1] Shaanxi Univ Sci & Technol, Shaanxi Joint Lab Artificial Intelligence, Xian 710021, Peoples R China
[2] Deyi Intelligent Technol, Xian 710076, Peoples R China
Keywords
Range sensing; mapping; object detection; segmentation and categorization; ground segmentation;
DOI
10.1109/LRA.2023.3316089
CLC Number
TP24 [Robotics];
Discipline Codes
080202 ; 1405 ;
Abstract
Ground segmentation is a fundamental task in 3D perception with 3D LiDAR sensors. Several ground segmentation methods have been proposed, but they often suffer from mis-segmentation, especially missed detections, owing to poor noise removal, unreasonable a priori ground partitioning, and a lack of rectification of segmentation results. To address these issues, a ground segmentation algorithm called Adaptive Patchwork is proposed as an extension of Patchwork++. First, Adaptive Patchwork replaces fully a priori partitions with variable adaptive partitions, alleviating the problems caused by point cloud sparsity. Second, based on the spatio-temporal correlation of ground point clouds, a spatial and temporal correction is proposed that integrates the spatio-temporal information of the ground at small cost, significantly reducing missed and false detections. Finally, exploiting the relationship between point cloud intensity and object reflectivity, a noise removal method based on multi-form features is proposed: noise that affects ground segmentation is removed using point cloud features such as intensity, height, and point position priors. Validated experimentally on the SemanticKITTI and Koblenz datasets, Adaptive Patchwork achieves satisfactory performance compared with state-of-the-art methods and runs fast compared with existing plane-fitting-based methods.
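The paper itself does not publish its filtering rules here, but the multi-form-feature noise removal described in the abstract (combining intensity, height, and a position prior) can be sketched as a simple per-point test. Everything below — the function name, the thresholds, and the exact feature combination — is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def remove_noise_multi_feature(points, intensity_min=0.05,
                               height_range=(-3.0, 1.0), max_radius=80.0):
    """Hypothetical multi-feature LiDAR noise filter.

    points: (N, 4) array of [x, y, z, intensity].
    A point survives only if its intensity, its height (z), and its
    planar distance from the sensor all fall within plausible bounds,
    mirroring the intensity/height/position-prior combination the
    abstract describes.
    """
    xyz, intensity = points[:, :3], points[:, 3]
    radius = np.linalg.norm(xyz[:, :2], axis=1)       # planar range from sensor
    keep = (
        (intensity >= intensity_min)                  # drop near-zero returns (e.g. dust, rain)
        & (xyz[:, 2] >= height_range[0])
        & (xyz[:, 2] <= height_range[1])              # plausible ground height band
        & (radius <= max_radius)                      # position prior: within sensor range
    )
    return points[keep]
```

In practice each threshold would be tuned per sensor; the point of the sketch is only that several cheap per-point features are ANDed together before any plane fitting is attempted.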
Pages: 7162-7169 (8 pages)
Related Papers
50 items total (showing items 41-50)
  • [41] RobNet: real-time road-object 3D point cloud segmentation based on SqueezeNet and cyclic CRF
    Sun, Wei
    Zhang, Zhenhao
    Huang, Jie
    SOFT COMPUTING, 2020, 24 (08) : 5805 - 5818
  • [42] Adaptive Precision Real-Time 3D Single Particle Tracking Microscopy
    Hou, Shangguo
    Welsher, Kevin
    BIOPHYSICAL JOURNAL, 2018, 114 (03) : 166A - 166A
  • [43] Real-time 3D video imaging with adaptive phase unwrapping method
    Xiang, Wang
    DIMENSIONAL OPTICAL METROLOGY AND INSPECTION FOR PRACTICAL APPLICATIONS XI, 2022, 12098
  • [44] RETROFIT: Real-Time Control of Time-Dependent 3D Point Cloud Profiles
    Biehler, M.
    Shi, J.
    JOURNAL OF MANUFACTURING SCIENCE AND ENGINEERING, 2024, 146 (06)
  • [45] Real Time Volume Measurement of Logistics Cartons Through 3D Point Cloud Segmentation
    Yan, Wu
    Xu, Chen
    Wu, Hongmin
    Li, Shuai
    Zhou, Xuefeng
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2021, PT III, 2021, 13015 : 324 - 335
  • [46] Exploring Spatial Context for 3D Semantic Segmentation of Point Clouds
    Engelmann, Francis
    Kontogianni, Theodora
    Hermans, Alexander
    Leibe, Bastian
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017), 2017, : 716 - 724
  • [47] Near Real-Time 3D Reconstruction and Quality 3D Point Cloud for Time-Critical Construction Monitoring
    Liu, Zuguang
    Kim, Daeho
    Lee, Sanghyun
    Zhou, Li
    An, Xuehui
    Liu, Meiyin
    BUILDINGS, 2023, 13 (02)
  • [48] The Ground Segmentation of 3D LIDAR Point Cloud with the Optimized Region Merging
    Na, Kiin
    Byun, Jaemin
    Roh, Myongchan
    Seo, Bumsu
    2013 INTERNATIONAL CONFERENCE ON CONNECTED VEHICLES AND EXPO (ICCVE), 2013, : 445 - 450
  • [49] AIFormer: Adaptive Interaction Transformer for 3D Point Cloud Understanding
    Chu, Xutao
    Zhao, Shengjie
    Dai, Hongwei
    REMOTE SENSING, 2024, 16 (21)
  • [50] Real-Time 3-D Segmentation on An Autonomous Embedded System: using Point Cloud and Camera
    Katare, Dewant
    El-Sharkawy, Mohamed
    PROCEEDINGS OF THE 2019 IEEE NATIONAL AEROSPACE AND ELECTRONICS CONFERENCE (NAECON), 2019, : 356 - 361