Parallel Point Clouds: Hybrid Point Cloud Generation and 3D Model Enhancement via Virtual-Real Integration

Cited by: 20
Authors
Tian, Yonglin [1 ,2 ]
Wang, Xiao [2 ,3 ]
Shen, Yu [2 ,4 ]
Guo, Zhongzheng [2 ]
Wang, Zilei [1 ]
Wang, Fei-Yue [2 ,3 ]
Affiliations
[1] Univ Sci & Technol China, Dept Automat, Hefei 230027, Peoples R China
[2] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
[3] Qingdao Acad Intelligent Ind, Qingdao 266000, Peoples R China
[4] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100091, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
virtual LiDAR; hybrid point clouds; virtual-real interaction; 3D detection; SYSTEMS; VISION;
DOI
10.3390/rs13152868
Chinese Library Classification (CLC) number
X [Environmental Science; Safety Science];
Discipline classification code
08 ; 0830 ;
Abstract
Three-dimensional information perception from point clouds is of vital importance for improving the ability of machines to understand the world, especially for autonomous driving and unmanned aerial vehicles. Data annotation for point clouds is one of the most challenging and costly tasks. In this paper, we propose a closed-loop, virtual-real interactive point cloud generation and model-upgrading framework called Parallel Point Clouds (PPCs). To the best of our knowledge, this is the first time that the training process has been changed from an open-loop to a closed-loop mechanism. Feedback from the evaluation results is used to update the training dataset, benefiting from the flexibility of artificial scenes. Within the framework, a point-based LiDAR simulation model is proposed, which greatly simplifies the scanning operation. In addition, a group-based placing method is put forward to integrate hybrid point clouds by locating candidate positions for virtual objects in real scenes. Taking advantage of CAD models and mobile LiDAR devices, two hybrid point cloud datasets, i.e., ShapeKITTI and MobilePointClouds, are built for 3D detection tasks. With almost zero labor cost for annotating newly added objects, the models (PointPillars) trained with ShapeKITTI and MobilePointClouds achieved 78.6% and 60.0%, respectively, of the average precision on 3D detection of the model trained with real data.
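The point-based LiDAR simulation idea mentioned in the abstract can be illustrated with a minimal sketch: instead of ray-casting against mesh geometry, densely sampled surface points of a CAD model are binned by azimuth and elevation into beam cells, and the nearest point per cell is kept as the simulated return. This is only an illustrative approximation under assumed sensor parameters (64 beams, 0.2-degree horizontal resolution, a KITTI-like vertical field of view), not the paper's actual implementation; the function name and all parameters are hypothetical.

```python
import numpy as np

def simulate_lidar(points, n_beams=64, h_res_deg=0.2, v_fov_deg=(-24.9, 2.0)):
    """Down-sample dense surface points into a LiDAR-like scan.

    points: (N, 3) array of XYZ coordinates in the sensor frame.
    Returns the nearest point per (elevation, azimuth) beam cell,
    mimicking one return per beam direction. All sensor parameters
    are illustrative assumptions, not values from the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    az = np.degrees(np.arctan2(y, x))                    # azimuth in [-180, 180)
    el = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))  # elevation angle

    # Bin each point into a (row, col) beam cell on the virtual scan pattern.
    col = ((az + 180.0) / h_res_deg).astype(int)
    v_lo, v_hi = v_fov_deg
    row = ((el - v_lo) / (v_hi - v_lo) * n_beams).astype(int)
    valid = (row >= 0) & (row < n_beams)

    # Keep only the closest return in each occupied cell (occlusion-free case).
    keep = {}
    for i in np.flatnonzero(valid):
        cell = (row[i], col[i])
        if cell not in keep or r[i] < r[keep[cell]]:
            keep[cell] = i
    return points[sorted(keep.values())]
```

In a hybrid pipeline such as PPCs, a scan like this would then be merged with a real background point cloud at a candidate placement position, so that the virtual object inherits a plausible LiDAR sampling pattern.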
Pages: 17
References
41 entries in total
[1] Anonymous. Advances in Neural Information Processing Systems, 2015.
[2] Bewley A. Conference on Robot Learning, 2020.
[3] Chen X.; Ma H.; Wan J.; Li B.; Xia T. Multi-View 3D Object Detection Network for Autonomous Driving. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017, pp. 6526-6534.
[4] Cubuk E. D.; Zoph B.; Mane D.; Vasudevan V.; Le Q. V. AutoAugment: Learning Augmentation Strategies from Data. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019, pp. 113-123.
[5] Dai A.; Qi C. R.; Niessner M. Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017, pp. 6545-6554.
[6] Deng J. Proc. CVPR IEEE, 2009, p. 248. DOI: 10.1109/CVPRW.2009.5206848.
[7] Dosovitskiy A. CARLA: An Open Urban Driving Simulator. 2017. DOI: 10.48550/ARXIV.1711.03938.
[8] Fang J.; Zhou D.; Yan F.; Zhao T.; Zhang F.; Ma Y.; Wang L.; Yang R. Augmented LiDAR Simulator for Autonomous Driving. IEEE Robotics and Automation Letters, 2020, 5(2), pp. 1931-1938.
[9] Wang F.-Y.; Zheng N.-N.; Cao D.; Martinez C. M.; Li L.; Liu T. Parallel Driving in CPSS: A Unified Approach for Transport Automation and Vehicle Intelligence. IEEE/CAA Journal of Automatica Sinica, 2017, 4(4), pp. 577-587.
[10] Gaidon A.; Wang Q.; Cabon Y.; Vig E. Virtual Worlds as Proxy for Multi-Object Tracking Analysis. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4340-4349.