Roadside Multiple Objects Extraction from Mobile Laser Scanning Point Cloud Based on DBN

Cited by: 0
Authors
Luo H. [1 ,2 ,3 ]
Fang L. [1 ,2 ,3 ]
Chen C. [1 ,2 ,3 ]
Huang Z. [1 ,2 ,3 ]
Affiliations
[1] National Engineering Research Centre of Geospatial Information Technology, Fuzhou University, Fuzhou
[2] Key Laboratory of Spatial Data Mining and Information Sharing of Ministry of Education, Fuzhou University, Fuzhou
[3] Spatial Information Research Center of Fujian Province, Fuzhou University, Fuzhou
Source
Fang, Lina (fangln@fzu.edu.cn) | 2018 / SinoMaps Press / Vol. 47
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
Keywords
Deep belief network (DBN); Deep learning; MLS point cloud; Point cloud segmentation; Roadside objects extraction
DOI
10.11947/j.AGCS.2018.20170524
Abstract
This paper proposes a novel algorithm that exploits deep belief network (DBN) architectures to extract and recognize roadside facilities (trees, cars, and traffic poles) from mobile laser scanning (MLS) point clouds. The proposed method first partitions the raw MLS point cloud into blocks and removes the ground and building points. To partition the off-ground points into individual objects, they are organized into an octree structure and clustered into candidate objects by connected-component analysis. To improve segmentation of clusters containing overlapping objects, a refinement step using a voxel-based normalized cut is then applied. In addition, a multi-view feature descriptor is generated for each independent roadside facility from binary images. Finally, a deep belief network (DBN) is trained to extract tree, car, and traffic-pole objects. Experiments on two datasets acquired by a Lynx Mobile Mapper system evaluate the validity of the proposed method. The precision of the tree, car, and traffic-pole extraction results was 97.31%, 97.79%, and 92.78%, respectively; the recall was 98.30%, 98.75%, and 96.77%; the quality was 95.70%, 93.81%, and 90.00%; and the F1 measure was 97.80%, 96.81%, and 94.73%. © 2018, Surveying and Mapping Press. All rights reserved.
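The reported quality and F1 figures are derivable from the per-class precision and recall. A minimal sketch, assuming the standard definitions of these metrics (the abstract does not state them), reproduces the tree-extraction numbers:

```python
# Hedged sketch: the abstract does not define its metrics, so the standard
# definitions are assumed here:
#   precision P = TP / (TP + FP),  recall R = TP / (TP + FN)
#   quality     = TP / (TP + FP + FN) = 1 / (1/P + 1/R - 1)
#   F1          = 2 * P * R / (P + R)

def f1_measure(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

def quality(p: float, r: float) -> float:
    """TP / (TP + FP + FN), rewritten in terms of precision and recall."""
    return 1.0 / (1.0 / p + 1.0 / r - 1.0)

# Reported tree-extraction precision/recall from the abstract.
p_trees, r_trees = 0.9731, 0.9830
print(f"quality = {quality(p_trees, r_trees):.4f}")    # → 0.9570
print(f"F1      = {f1_measure(p_trees, r_trees):.4f}") # → 0.9780
```

The same formulas also recover the traffic-pole figures (90.00% quality, 94.73% F1) from its reported precision and recall.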
Pages: 234-246
Page count: 12