Unsupervised Depth Completion From Visual Inertial Odometry

Cited by: 71
Authors
Wong, Alex [1]
Fei, Xiaohan [1]
Tsuei, Stephanie [1]
Soatto, Stefano [1]
Affiliation
[1] Univ Calif Los Angeles, Samueli Sch Engn, Comp Sci Dept, Los Angeles, CA 90095 USA
Keywords
Visual learning; sensor fusion
DOI
10.1109/LRA.2020.2969938
CLC Number
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
We describe a method to infer dense depth from camera motion and sparse depth as estimated using a visual-inertial odometry system. Unlike other scenarios that use point clouds from lidar or structured-light sensors, we have only a few hundred to a few thousand points, insufficient to inform the topology of the scene. Our method first constructs a piecewise planar scaffolding of the scene and then uses it to infer dense depth from the image along with the sparse points. We use a predictive cross-modal criterion, akin to "self-supervision," measuring photometric consistency across time, forward-backward pose consistency, and geometric compatibility with the sparse point cloud. We also present the first visual-inertial + depth dataset, which we hope will foster additional exploration into combining the complementary strengths of visual and inertial sensors. To compare our method to prior work, we adopt the unsupervised KITTI depth completion benchmark, where we achieve state-of-the-art performance.
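The abstract names three unsupervised training signals (photometric consistency across time, forward-backward pose consistency, and agreement with the sparse VIO points) plus a piecewise planar scaffolding built from the sparse depth. The sketch below is a minimal illustration of those ideas under stated assumptions, not the authors' released implementation: the SciPy-based Delaunay interpolation for the scaffolding, the function names, and the loss weights are placeholders chosen for clarity.

# Illustrative sketch only (assumed names and weights, not the paper's code):
# build a piecewise planar "scaffolding" by triangulating sparse VIO depth
# points, then score a dense depth prediction with the three unsupervised
# terms listed in the abstract.
import numpy as np
import torch
from scipy.interpolate import LinearNDInterpolator

def scaffold_depth(sparse_depth):
    # sparse_depth: (H, W) NumPy array, zero where no VIO point projects.
    # Delaunay triangulation + linear interpolation yields a piecewise planar
    # approximation of the scene inside the convex hull of the sparse points.
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)
    interp = LinearNDInterpolator(
        np.stack([ys, xs], axis=1), sparse_depth[ys, xs], fill_value=0.0)
    grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return interp(grid_y, grid_x).astype(np.float32)

def photometric_loss(image, image_reprojected):
    # Photometric consistency across time: the current image should match its
    # reconstruction warped from an adjacent frame via predicted depth and pose.
    return (image - image_reprojected).abs().mean()

def pose_consistency_loss(T_forward, T_backward):
    # Forward-backward pose consistency: composing the forward and backward
    # relative poses (B, 4, 4) should recover the identity transform.
    identity = torch.eye(4, device=T_forward.device).expand_as(T_forward)
    return (torch.matmul(T_forward, T_backward) - identity).abs().mean()

def sparse_depth_loss(pred_depth, sparse_depth):
    # Geometric compatibility with the sparse point cloud: agree with the
    # VIO depths wherever a measurement exists (inputs are torch tensors).
    mask = (sparse_depth > 0).float()
    return (mask * (pred_depth - sparse_depth).abs()).sum() / mask.sum().clamp(min=1)

def total_loss(image, image_reprojected, T_forward, T_backward,
               pred_depth, sparse_depth, w_ph=1.0, w_pc=0.1, w_sz=0.5):
    # Weights w_ph, w_pc, w_sz are illustrative, not values from the paper.
    return (w_ph * photometric_loss(image, image_reprojected)
            + w_pc * pose_consistency_loss(T_forward, T_backward)
            + w_sz * sparse_depth_loss(pred_depth, sparse_depth))

The image warping that produces image_reprojected (projecting an adjacent frame through the predicted depth and the VIO pose) is omitted here for brevity.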
Pages: 1899-1906
Page count: 8
Related Papers
50 records in total
  • [21] Unsupervised Monocular Estimation of Depth and Visual Odometry Using Attention and Depth-Pose Consistency Loss
    Song, Xiaogang; Hu, Haoyue; Liang, Li; Shi, Weiwei; Xie, Guo; Lu, Xiaofeng; Hei, Xinhong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 3517-3529
  • [22] An Equivariant Filter for Visual Inertial Odometry
    van Goor, Pieter; Mahony, Robert
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021: 14432-14438
  • [23] Robocentric visual-inertial odometry
    Huai, Zheng; Huang, Guoquan
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2022, 41(7): 667-689
  • [24] Robocentric Visual-Inertial Odometry
    Huai, Zheng; Huang, Guoquan
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018: 6319-6326
  • [25] Cooperative Visual-Inertial Odometry
    Zhu, Pengxiang; Yang, Yulin; Ren, Wei; Huang, Guoquan
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021: 13135-13141
  • [26] Robust Depth-Aided Visual-Inertial-Wheel Odometry for Mobile Robots
    Zhao, Xinyang; Li, Qinghua; Wang, Changhong; Dou, Hexuan; Liu, Bo
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2024, 71(8): 9161-9171
  • [27] GANVO: Unsupervised Deep Monocular Visual Odometry and Depth Estimation with Generative Adversarial Networks
    Almalioglu, Yasin; Saputra, Muhamad Risqi U.; de Gusmao, Pedro P. B.; Markham, Andrew; Trigoni, Niki
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019: 5474-5480
  • [28] EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos
    Ozyoruk, Kutsev Bengisu; Gokceler, Guliz Irem; Bobrow, Taylor L.; Coskun, Gulfize; Incetan, Kagan; Almalioglu, Yasin; Mahmood, Faisal; Curto, Eva; Perdigoto, Luis; Oliveira, Marina; Sahin, Hasan; Araujo, Helder; Alexandrino, Henrique; Durr, Nicholas J.; Gilbert, Hunter B.; Turan, Mehmet
    MEDICAL IMAGE ANALYSIS, 2021, 71
  • [29] SGANVO: Unsupervised Deep Visual Odometry and Depth Estimation With Stacked Generative Adversarial Networks
    Feng, Tuo; Gu, Dongbing
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2019, 4(4): 4431-4437
  • [30] Radar Visual Inertial Odometry and Radar Thermal Inertial Odometry: Robust Navigation even in Challenging Visual Conditions
    Doer, Christopher; Trommer, Gert F.
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021: 331-338