RGB-D dense mapping with feature-based method

Cited: 0
Authors
Fu, Xingyin [1 ,2 ,3 ,4 ]
Zhu, Feng [1 ,3 ,4 ]
Wu, Qingxiao [1 ,3 ,4 ]
Lu, Rongrong [1 ,2 ,3 ,4 ]
Affiliations
[1] Chinese Acad Sci, Shenyang Inst Automat, Shenyang 110016, Liaoning, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Chinese Acad Sci, Key Lab Optoelect Informat Proc, Shenyang 110016, Liaoning, Peoples R China
[4] Key Lab Image Understanding & Comp Vis, Shenyang 110016, Liaoning, Peoples R China
Keywords
dense SLAM; RGB-D camera; TSDF; reconstruction; real-time; SLAM;
DOI
10.1117/12.2505305
Chinese Library Classification
O43 [Optics]
Discipline codes
070207; 0803
Abstract
Simultaneous Localization and Mapping (SLAM) plays an important role in navigation and augmented reality (AR) systems. While feature-based visual SLAM has reached a relatively mature stage, RGB-D dense SLAM has grown popular since the advent of consumer RGB-D cameras. Unlike feature-based visual SLAM systems, RGB-D dense SLAM systems such as KinectFusion compute camera poses by registering the current frame against images raycast from the global model, and they produce a dense surface by fusing the RGB-D stream. In this paper, we propose a novel reconstruction system built on ORB-SLAM2. First, to generate the dense surface in real time, we fuse the RGB-D frames with a truncated signed distance function (TSDF). Because camera tracking drift is inevitable, it is unwise to represent the entire reconstruction space with a single TSDF model or to represent the entire measured surface with voxel hashing. Instead, we use the moving volume proposed in Kintinuous to represent the reconstruction region around the current frame's frustum. Unlike Kintinuous, which corrects points with an embedded deformation graph after pose-graph optimization, we re-fuse the images with the optimized camera poses and regenerate the dense surface once the user ends the scan. Second, we use the reconstructed dense map to filter outliers from the sparse feature map. The depth maps of the keyframes are raycast from the TSDF volume according to the camera poses, and the feature points in the local map are projected into the nearest keyframe. If the discrepancy between a feature's depth and the corresponding value in the depth map exceeds a threshold, the feature is considered an outlier and removed from the feature map. The discrepancy is also combined with the feature's pyramid layer to compute the information matrix when minimizing the reprojection error.
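As a rough sketch only (not the authors' implementation), the depth-discrepancy outlier test described above might look like the following; the function name `filter_feature_outliers` and the `threshold` value are illustrative assumptions:

```python
import numpy as np

def filter_feature_outliers(points_w, T_cw, K, depth_map, threshold=0.05):
    """Drop map features whose depth disagrees with the raycast depth map.

    points_w  : (N, 3) feature positions in world coordinates
    T_cw      : (4, 4) world-to-camera pose of the nearest keyframe
    K         : (3, 3) camera intrinsics
    depth_map : (H, W) depth image raycast from the TSDF volume
    threshold : max allowed depth discrepancy in metres (assumed value)
    """
    H, W = depth_map.shape
    # Transform features into the keyframe's camera frame.
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    pts_c = (T_cw @ pts_h.T).T[:, :3]
    z = pts_c[:, 2]
    # Project onto the image plane (guard against division by non-positive depth).
    uv = (K @ pts_c.T).T
    z_safe = np.where(z > 0, z, 1.0)
    u = np.round(uv[:, 0] / z_safe).astype(int)
    v = np.round(uv[:, 1] / z_safe).astype(int)
    keep = np.ones(len(points_w), dtype=bool)  # features outside the image are kept
    in_img = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Look up the raycast depth at each projected pixel.
    d = np.where(in_img, depth_map[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)], np.nan)
    valid = in_img & np.isfinite(d) & (d > 0)
    # A feature is an outlier when its depth differs too much from the raycast depth.
    keep[valid] = np.abs(z[valid] - d[valid]) <= threshold
    return points_w[keep], keep
```

In the paper, the surviving discrepancy values additionally weight the information matrix during reprojection-error minimization; the sketch above covers only the hard outlier-rejection step.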
Features in the sparse map that lie near the reconstructed dense surface thus exert greater influence on camera tracking. We compare the accuracy of the estimated camera trajectories and the reconstructed 3D models against state-of-the-art systems on the TUM and ICL-NUIM RGB-D benchmark datasets. Experimental results show that our system achieves state-of-the-art accuracy.
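For readers unfamiliar with TSDF fusion, the per-frame integration step the abstract builds on can be sketched in the classic KinectFusion style of weighted running averages per voxel. This is an illustrative sketch under assumed names (`fuse_depth_into_tsdf`, `trunc`, `max_weight`), not the paper's moving-volume implementation:

```python
import numpy as np

def fuse_depth_into_tsdf(tsdf, weights, depth, K, T_cw, origin, voxel_size,
                         trunc=0.2, max_weight=50.0):
    """Integrate one depth frame into a TSDF volume by weighted averaging.

    tsdf, weights : (X, Y, Z) float arrays holding distances and fusion weights
    depth         : (H, W) depth image in metres
    K, T_cw       : camera intrinsics and world-to-camera pose
    origin        : world position of voxel (0, 0, 0)
    voxel_size    : edge length of one voxel in metres
    trunc         : truncation distance (assumed value)
    """
    X, Y, Z = tsdf.shape
    H, W = depth.shape
    # World coordinates of every voxel centre.
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts_w = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
    pts_c = (T_cw[:3, :3] @ pts_w.T).T + T_cw[:3, 3]
    z = pts_c[:, 2]
    # Project voxel centres into the depth image.
    uv = (K @ pts_c.T).T
    z_safe = np.where(z > 0, z, 1.0)
    u = np.round(uv[:, 0] / z_safe).astype(int)
    v = np.round(uv[:, 1] / z_safe).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Signed distance along the viewing ray, truncated to [-trunc, trunc];
    # voxels far behind the surface are skipped.
    sdf = d - z
    valid &= sdf > -trunc
    sdf = np.clip(sdf, -trunc, trunc)
    # Weighted running-average update of the affected voxels.
    flat_t, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    w_new = 1.0
    flat_t[valid] = (flat_w[valid] * flat_t[valid] + w_new * sdf[valid]) / (flat_w[valid] + w_new)
    flat_w[valid] = np.minimum(flat_w[valid] + w_new, max_weight)
    return tsdf, weights
```

The paper's system re-runs this fusion with the optimized camera poses after scanning ends, which is why drift-corrupted geometry can be regenerated rather than deformed in place.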
Pages: 10