DIMENSIONALITY BASED SCALE SELECTION IN 3D LIDAR POINT CLOUDS

Cited by: 0
Authors
Demantke, Jerome [1 ]
Mallet, Clement [1 ]
David, Nicolas [1 ]
Vallet, Bruno [1 ]
Affiliations
[1] Univ Paris Est, Lab MATIS, IGN, F-94165 St Mande, France
Source
ISPRS WORKSHOP LASER SCANNING 2011 | 2011 / Vol. 38-5 / Issue W12
Keywords
point cloud; adaptive neighborhood; scale selection; multi-scale analysis; feature; PCA; eigenvalues; dimensionality;
DOI
Not available
Chinese Library Classification (CLC)
P9 [Physical Geography];
Discipline classification code
0705 ; 070501 ;
Abstract
This paper presents a multi-scale method that computes robust geometric features on lidar point clouds in order to retrieve the optimal neighborhood size for each point. Three dimensionality features are computed on spherical neighborhoods at various radii. Based on combinations of the eigenvalues of the local structure tensor, they describe the shape of the neighborhood, indicating whether the local geometry is more linear (1D), planar (2D), or volumetric (3D). Two radius-selection criteria have been tested and compared for automatically finding the optimal neighborhood radius of each point. In addition, the procedure yields a dimensionality labelling that provides significant hints for classification and segmentation purposes. The method is successfully applied to 3D point clouds from airborne, terrestrial, and mobile mapping systems, since no a priori knowledge of the distribution of the 3D points is required. The extracted dimensionality features and labellings compare favorably to those computed from constant-size neighborhoods.
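The abstract describes a per-point, multi-scale procedure: estimate the local structure tensor over spherical neighborhoods of increasing radius, turn its eigenvalues into 1D/2D/3D dimensionality features, and keep the radius at which those features are least ambiguous. The Python sketch below illustrates that idea; it is not the authors' implementation. The feature definitions (a1D, a2D, a3D from the square roots of the sorted eigenvalues) and the Shannon-entropy radius-selection criterion are common choices assumed here, and the function names, radii, and toy data are purely illustrative.

```python
# Minimal sketch of dimensionality-based scale selection (assumptions noted above).
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_features(neighbors):
    """Return (a1D, a2D, a3D) from the eigenvalues of the 3x3 local covariance matrix."""
    cov = np.cov(neighbors.T)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]      # eigenvalues, decreasing
    sigma = np.sqrt(np.clip(lam, 0.0, None))
    if sigma[0] <= 0.0:                                # degenerate neighborhood guard
        return np.array([0.0, 0.0, 1.0])
    a1 = (sigma[0] - sigma[1]) / sigma[0]              # linear (1D) behaviour
    a2 = (sigma[1] - sigma[2]) / sigma[0]              # planar (2D) behaviour
    a3 = sigma[2] / sigma[0]                           # volumetric (3D) behaviour
    return np.array([a1, a2, a3])                      # by construction these sum to 1

def optimal_radius(points, query_idx, radii, eps=1e-12):
    """For one point, pick the radius minimizing the entropy of (a1D, a2D, a3D)."""
    tree = cKDTree(points)
    best_r, best_entropy, best_feats = None, np.inf, None
    for r in radii:
        idx = tree.query_ball_point(points[query_idx], r)
        if len(idx) < 4:                               # too few points for a stable tensor
            continue
        feats = dimensionality_features(points[idx])
        p = feats / max(feats.sum(), eps)
        entropy = -np.sum(p * np.log(p + eps))         # low entropy = unambiguous 1D/2D/3D
        if entropy < best_entropy:
            best_r, best_entropy, best_feats = r, entropy, feats
    return best_r, best_feats

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy scene: a noisy 3D line embedded in sparse volumetric clutter.
    line = np.c_[np.linspace(0, 5, 200), np.zeros(200), np.zeros(200)]
    line += 0.01 * rng.normal(size=(200, 3))
    clutter = rng.uniform(-3, 8, size=(100, 3))
    pts = np.vstack([line, clutter])
    r_opt, feats = optimal_radius(pts, query_idx=100, radii=np.geomspace(0.05, 2.0, 12))
    print("optimal radius:", r_opt, "features (a1D, a2D, a3D):", feats)
    print("dimensionality label:", ["1D", "2D", "3D"][int(np.argmax(feats))])
```

For a point on the line, small radii already yield a dominant a1D and hence low entropy, so a small optimal radius and a 1D label are expected; the same loop applied to every point gives the per-point adaptive neighborhoods and dimensionality labelling discussed in the abstract.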
Pages: 97-102
Page count: 6