Unsupervised Depth Completion From Visual Inertial Odometry

Cited by: 71
Authors
Wong, Alex [1 ]
Fei, Xiaohan [1 ]
Tsuei, Stephanie [1 ]
Soatto, Stefano [1 ]
Affiliations
[1] Univ Calif Los Angeles, Samueli Sch Engn, Comp Sci Dept, Los Angeles, CA 90095 USA
Keywords
Visual learning; sensor fusion;
DOI
10.1109/LRA.2020.2969938
Chinese Library Classification (CLC)
TP24 [Robotics];
Discipline Codes
080202 ; 1405 ;
Abstract
We describe a method to infer dense depth from camera motion and sparse depth as estimated using a visual-inertial odometry system. Unlike other scenarios using point clouds from lidar or structured light sensors, we have only a few hundred to a few thousand points, insufficient to inform the topology of the scene. Our method first constructs a piecewise planar scaffolding of the scene, and then uses it to infer dense depth using the image along with the sparse points. We use a predictive cross-modal criterion, akin to "self-supervision," measuring photometric consistency across time, forward-backward pose consistency, and geometric compatibility with the sparse point cloud. We also present the first visual-inertial + depth dataset, which we hope will foster additional exploration into combining the complementary strengths of visual and inertial sensors. To compare our method to prior work, we adopt the unsupervised KITTI depth completion benchmark, where we achieve state-of-the-art performance.
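The training criterion described in the abstract combines three signals: photometric consistency across time, forward-backward pose consistency, and agreement with the sparse VIO point cloud. A minimal sketch of how such a weighted multi-term loss could be combined is shown below; the function name, the weights, and the use of plain mean-absolute errors are illustrative assumptions, not the authors' actual formulation or values.

```python
import numpy as np

def unsupervised_depth_loss(photo_residual, pose_residual, pred_depth,
                            sparse_depth, sparse_mask,
                            w_photo=1.0, w_pose=0.1, w_sparse=1.0):
    """Illustrative combination of the three cross-modal terms.

    photo_residual: per-pixel photometric reprojection error between frames.
    pose_residual:  forward-backward discrepancy of estimated relative poses.
    pred_depth:     predicted dense depth map.
    sparse_depth:   sparse depth from VIO (valid only where sparse_mask == 1).
    Weights are hypothetical placeholders.
    """
    # Photometric term: mean absolute reprojection error across time.
    l_photo = np.mean(np.abs(photo_residual))
    # Pose term: forward-backward pose consistency.
    l_pose = np.mean(np.abs(pose_residual))
    # Sparse term: deviation from VIO depths, averaged over valid points only.
    n_valid = max(sparse_mask.sum(), 1)
    l_sparse = np.sum(sparse_mask * np.abs(pred_depth - sparse_depth)) / n_valid
    return w_photo * l_photo + w_pose * l_pose + w_sparse * l_sparse
```

The sparse term is normalized by the number of valid points rather than the image size, since VIO provides only a few hundred to a few thousand depths per frame.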
Pages: 1899 - 1906
Page count: 8
Related Papers
50 items total
  • [31] UnDeepLIO: Unsupervised Deep Lidar-Inertial Odometry
    Tu, Yiming
    Xie, Jin
    PATTERN RECOGNITION, ACPR 2021, PT II, 2022, 13189 : 189 - 202
  • [32] A review of visual inertial odometry from filtering and optimisation perspectives
    Gui, Jianjun
    Gu, Dongbing
    Wang, Sen
    Hu, Huosheng
    ADVANCED ROBOTICS, 2015, 29 (20) : 1289 - 1301
  • [33] Unsupervised Deep Visual-Inertial Odometry with Online Error Correction for RGB-D Imagery
    Shamwell, E. Jared
    Lindgren, Kyle
    Leung, Sarah
    Nothwang, William D.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2020, 42 (10) : 2478 - 2493
  • [34] A Depth-added Visual-Inertial Odometry Based on MEMS IMU with Fast Initialization
    Zhang, Yuhan
    Zhao, Haocheng
    Du, Shuang
    Yu, Limin
    Wang, Xinheng
    2022 HUMAN-CENTERED COGNITIVE SYSTEMS, HCCS, 2022, : 64 - 70
  • [35] SelfVIO: Self-supervised deep monocular Visual-Inertial Odometry and depth estimation
    Almalioglu, Yasin
    Turan, Mehmet
    Saputra, Muhamad Risqi U.
    de Gusmao, Pedro P. B.
    Markham, Andrew
    Trigoni, Niki
    NEURAL NETWORKS, 2022, 150 : 119 - 136
  • [36] Visual Inertial Odometry with Pentafocal Geometric Constraints
    Pyojin Kim
    Hyon Lim
    H. Jin Kim
    International Journal of Control, Automation and Systems, 2018, 16 : 1962 - 1970
  • [37] Visual and Inertial Odometry for a Disaster Recovery Humanoid
    George, Michael
    Tardif, Jean-Philippe
    Kelly, Alonzo
    FIELD AND SERVICE ROBOTICS, 2015, 105 : 501 - 514
  • [38] Visual and inertial odometry based on sensor fusion
    Troncoso, Juan Manuel Reyes
    Correa, Alexander Ceron
    2024 XXIV SYMPOSIUM OF IMAGE, SIGNAL PROCESSING, AND ARTIFICIAL VISION, STSIVA 2024, 2024,
  • [40] Compass aided visual-inertial odometry
    Wang, Yandong
    Zhang, Tao
    Wang, Yuanchao
    Ma, Jingwei
    Li, Yanhui
    Han, Jingzhuang
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2019, 60 : 101 - 115