Robust and Efficient CPU-Based RGB-D Scene Reconstruction

Cited by: 0
Authors
Li J. [1, 2]
Gao W. [1, 2]
Li H. [1, 2]
Tang F. [1, 2]
Wu Y. [1, 2]
Affiliations
[1] National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing
[2] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing
Source
Gao, Wei (wgao@nlpr.ia.ac.cn) | Sensors, 2018, Vol. 18, Issue 11, MDPI AG
Funding
National Natural Science Foundation of China
Keywords
3D reconstruction; Camera tracking; Simultaneous localization and mapping (SLAM); Volumetric integration;
DOI
10.3390/s18113652
Abstract
3D scene reconstruction is an important topic in computer vision. A complete scene is reconstructed from views acquired along the camera trajectory, each view containing a small part of the scene. Tracking in textureless scenes is a well-known Gordian knot for camera tracking, and obtaining accurate 3D models quickly remains a major challenge for existing systems. For robotics applications, we propose a robust CPU-based approach to reconstruct indoor scenes efficiently with a consumer RGB-D camera. The proposed approach bridges feature-based camera tracking and volumetric data integration, and achieves good reconstruction performance in terms of both robustness and efficiency. The key points of our approach are: (i) a robust and fast camera tracking method combining points and edges, which improves tracking stability in textureless scenes; (ii) an efficient data fusion strategy that selects camera views and integrates RGB-D images on multiple scales, which enhances the efficiency of volumetric integration; (iii) a novel RGB-D scene reconstruction system that can be quickly implemented on a standard CPU. Experimental results demonstrate that our approach reconstructs scenes with higher robustness and efficiency than state-of-the-art reconstruction systems. © 2018 by the authors. Licensee MDPI, Basel, Switzerland.
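The abstract names two core components: point-and-edge camera tracking and multi-scale volumetric integration. As background for the second component, the sketch below shows a plain, single-scale projective TSDF fusion step in Python/NumPy. It is only an illustration of the general volumetric-integration technique the keywords refer to, not the authors' CPU-optimized, multi-scale method; the function name integrate_frame and all parameter values are hypothetical.

```python
import numpy as np

# Illustrative parameters (hypothetical, not taken from the paper).
VOXEL_SIZE = 0.01   # voxel edge length in metres
TRUNC_DIST = 0.04   # TSDF truncation distance in metres
GRID_DIM = 128      # voxels per axis

def integrate_frame(tsdf, weights, depth, K, cam_pose, origin):
    """Fuse one depth frame into a TSDF volume by projective integration.

    tsdf, weights : (GRID_DIM, GRID_DIM, GRID_DIM) float arrays, updated in place
    depth         : (H, W) depth image in metres, 0 marks invalid pixels
    K             : 3x3 camera intrinsic matrix
    cam_pose      : 4x4 camera-to-world pose estimated by the tracker
    origin        : world position of voxel (0, 0, 0), shape (3,)
    """
    H, W = depth.shape
    world_to_cam = np.linalg.inv(cam_pose)

    # World coordinates of all voxel centres.
    idx = np.indices((GRID_DIM,) * 3).reshape(3, -1).T          # (N, 3)
    pts_w = origin + (idx + 0.5) * VOXEL_SIZE

    # Transform voxel centres into the camera frame and project them.
    pts_c = pts_w @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts_c[:, 2]
    valid = z > 1e-6
    u = np.zeros(len(z), dtype=int)
    v = np.zeros(len(z), dtype=int)
    u[valid] = np.round(K[0, 0] * pts_c[valid, 0] / z[valid] + K[0, 2]).astype(int)
    v[valid] = np.round(K[1, 1] * pts_c[valid, 1] / z[valid] + K[1, 2]).astype(int)
    valid &= (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Look up the measured depth at each projected pixel.
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0

    # Truncated signed distance along the viewing ray, normalised to [-1, 1].
    sdf = (d - z) / TRUNC_DIST
    valid &= sdf > -1.0                      # skip voxels far behind the surface
    sdf = np.clip(sdf, -1.0, 1.0)

    # Weighted running-average update: the standard TSDF fusion rule.
    flat = np.ravel_multi_index(idx[valid].T, (GRID_DIM,) * 3)
    t, w = tsdf.ravel(), weights.ravel()     # views into the (contiguous) volumes
    t[flat] = (t[flat] * w[flat] + sdf[valid]) / (w[flat] + 1.0)
    w[flat] += 1.0
```

A surface mesh can then be extracted from the fused volume with marching cubes. The view-selection and multi-scale fusion strategy described in the abstract would, in addition, decide which frames and which resolution levels are fed into such an update, which is what makes the integration efficient on a CPU.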
Related papers
50 records in total
  • [1] Robust and Efficient CPU-Based RGB-D Scene Reconstruction
    Li, Jianwei
    Gao, Wei
    Li, Heping
    Tang, Fulin
    Wu, Yihong
    SENSORS, 2018, 18 (11)
  • [2] Crime Scene Reconstruction with RGB-D Sensors
    Amamra, Abdenour
    Amara, Yacine
    Boumaza, Khalid
    Benayad, Aissa
    PROCEEDINGS OF THE 2019 FEDERATED CONFERENCE ON COMPUTER SCIENCE AND INFORMATION SYSTEMS (FEDCSIS), 2019, : 391 - 396
  • [3] Efficient Scene Simulation for Robust Monte Carlo Localization using an RGB-D Camera
    Fallon, Maurice F.
    Johannsson, Hordur
    Leonard, John J.
    2012 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2012, : 1663 - 1670
  • [4] A 3D Reconstruction System for Large Scene Based on RGB-D Image
    Wang, Hongren
    Wang, Pengbo
    Wang, Xiaodi
    Peng, Tianchen
    Zhang, Baochang
    INTELLIGENCE SCIENCE AND BIG DATA ENGINEERING, 2018, 11266 : 518 - 527
  • [5] Three-dimensional reconstruction of semantic scene based on RGB-D map
    Lin J.-H.
    Wang Y.-J.
    Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 2018, 26 (05) : 1231 - 1241
  • [6] Robust 3D Reconstruction With an RGB-D Camera
    Wang, Kangkan
    Zhang, Guofeng
    Bao, Hujun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2014, 23 (11) : 4893 - 4906
  • [7] SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite
    Song, Shuran
    Lichtenberg, Samuel P.
    Xiao, Jianxiong
    2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 567 - 576
  • [8] Efficient and Robust Indoor People Detection based on RGB-D Camera
    He, Qi
    Liu, Kuixiang
    Qu, Lei
    2017 CHINESE AUTOMATION CONGRESS (CAC), 2017, : 1063 - 1068
  • [9] Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis
    Seichter, Daniel
    Koehler, Mona
    Lewandowski, Benjamin
    Wengefeld, Tim
    Gross, Horst-Michael
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 13525 - 13531
  • [10] Multi-robot collaborative SLAM and scene reconstruction based on RGB-D camera
    Ma, Tianyun
    Zhang, Tao
    Li, Shaopeng
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 139 - 144