Robust and Efficient CPU-Based RGB-D Scene Reconstruction

Cited by: 0
Authors
Li J. [1 ,2 ]
Gao W. [1 ,2 ]
Li H. [1 ,2 ]
Tang F. [1 ,2 ]
Wu Y. [1 ,2 ]
Affiliations
[1] National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing
[2] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing
Source
Gao, Wei (wgao@nlpr.ia.ac.cn) | Sensors, MDPI AG, 2018, Vol. 18
Funding
National Natural Science Foundation of China
关键词
3D reconstruction; Camera tracking; Simultaneous localization and mapping (SLAM); Volumetric integration;
DOI
10.3390/s18113652
Abstract
3D scene reconstruction is an important topic in computer vision. A complete scene is reconstructed from views acquired along the camera trajectory, each view containing a small part of the scene. Textureless scenes are a well-known Gordian knot for camera tracking, and obtaining accurate 3D models quickly remains a major challenge for existing systems. Targeting robotics applications, we propose a robust CPU-based approach that efficiently reconstructs indoor scenes with a consumer RGB-D camera. The proposed approach bridges feature-based camera tracking and volumetric data integration, and achieves good reconstruction performance in terms of both robustness and efficiency. The key points of our approach are: (i) a robust and fast camera tracking method combining points and edges, which improves tracking stability in textureless scenes; (ii) an efficient data fusion strategy that selects camera views and integrates RGB-D images at multiple scales, which enhances the efficiency of volumetric integration; (iii) a novel RGB-D scene reconstruction system that can be quickly implemented on a standard CPU. Experimental results demonstrate that our approach reconstructs scenes with higher robustness and efficiency than state-of-the-art reconstruction systems. © 2018 by the authors. Licensee MDPI, Basel, Switzerland.
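The multi-scale volumetric integration named in key point (ii) builds on standard TSDF (truncated signed distance function) fusion. As a rough single-scale illustration only, not the authors' implementation, the following NumPy sketch fuses one depth image into a TSDF voxel grid using the classic running weighted average; all function and parameter names are illustrative.

```python
import numpy as np

def integrate_tsdf(tsdf, weights, depth, K, cam_pose, voxel_size, origin, trunc=0.05):
    """Fuse one depth image into a TSDF voxel grid (running weighted average).

    tsdf, weights : (X, Y, Z) arrays, updated in place.
    depth         : (H, W) depth image in metres (0 = invalid).
    K             : 3x3 pinhole intrinsics; cam_pose: 4x4 camera-to-world pose.
    """
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel centre.
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts = origin + (np.stack([ii, jj, kk], axis=-1) + 0.5) * voxel_size
    # Transform voxel centres into the camera frame.
    world2cam = np.linalg.inv(cam_pose)
    cam = pts @ world2cam[:3, :3].T + world2cam[:3, 3]
    z = np.maximum(cam[..., 2], 1e-9)  # guard against division by zero
    # Project into the image plane.
    u = np.round(cam[..., 0] / z * K[0, 0] + K[0, 2]).astype(int)
    v = np.round(cam[..., 1] / z * K[1, 1] + K[1, 2]).astype(int)
    H, W = depth.shape
    valid = (cam[..., 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
    valid &= d > 0
    # Truncated signed distance along the ray: positive in front of the surface.
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)
    valid &= sdf > -1.0  # drop voxels far behind the observed surface
    # Running weighted average (Curless-Levoy style integration).
    w_new = weights + valid
    tsdf[:] = np.where(valid, (tsdf * weights + sdf) / np.maximum(w_new, 1.0), tsdf)
    weights[:] = w_new
```

A multi-scale variant, as described in the abstract, would maintain coarser grids alongside the fine one and integrate selected keyframes into each; a mesh is then extracted from the fused TSDF (e.g., via marching cubes).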
Related Papers
50 items in total
  • [41] Robust RGB-D visual odometry based on edges and points
    Yao, Erliang
    Zhang, Hexin
    Xu, Hui
    Song, Haitao
    Zhang, Guoliang
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2018, 107 : 209 - 220
  • [42] Efficient CPU-based volume ray tracing techniques
    Marmitt, Gerd
    Friedrich, Heiko
    Slusallek, Philipp
    COMPUTER GRAPHICS FORUM, 2008, 27 (06) : 1687 - 1709
  • [43] 3D Disaster Scene Reconstruction Using a Canine-Mounted RGB-D Sensor
    Tran, Jimmy
    Ufkes, Alex
    Ferworn, Alex
    Fiala, Mark
    2013 INTERNATIONAL CONFERENCE ON COMPUTER AND ROBOT VISION (CRV), 2013, : 23 - 28
  • [44] RGB-D Multi-View System Calibration for Full 3D Scene Reconstruction
    Afzal, Hassan
    Aouada, Djamila
    Fofi, David
    Mirbach, Bruno
    Ottersten, Bjoern
    2014 22ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2014, : 2459 - 2464
  • [45] 2½D Scene Reconstruction of Indoor Scenes from Single RGB-D Images
    Neverova, Natalia
    Muselet, Damien
    Tremeau, Alain
    COMPUTATIONAL COLOR IMAGING, CCIW 2013, 2013, 7786 : 281 - 295
  • [46] Completed Dense Scene Flow in RGB-D Space
    Wang, Yucheng
    Zhang, Jian
    Liu, Zicheng
    Wu, Qiang
    Chou, Philip
    Zhang, Zhengyou
    Jia, Yunde
    COMPUTER VISION - ACCV 2014 WORKSHOPS, PT I, 2015, 9008 : 191 - 205
  • [47] A method proposal of scene recognition for RGB-D cameras
    Danciu, Gabriel-Mihail
    2016 IEEE 11TH INTERNATIONAL SYMPOSIUM ON APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS (SACI), 2016, : 301 - 304
  • [48] Intrinsic Scene Decomposition from RGB-D images
    Hachama, Mohammed
    Ghanem, Bernard
    Wonka, Peter
    2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 810 - 818
  • [49] Learning Effective RGB-D Representations for Scene Recognition
    Song, Xinhang
    Jiang, Shuqiang
    Herranz, Luis
    Chen, Chengpeng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (02) : 980 - 993
  • [50] RGB-D Scene Segmentation with Conditional Random Field
    Nasab, Sara Ershadi
    Kasaei, Shohreh
    Sanaei, Esmaeil
    2014 6TH CONFERENCE ON INFORMATION AND KNOWLEDGE TECHNOLOGY (IKT), 2014, : 134 - 139