Unsupervised Monocular Visual-inertial Odometry Network

Cited by: 0
Authors
Wei, Peng [1,2]
Hua, Guoliang [1]
Huang, Weibo [1]
Meng, Fanyang [2]
Liu, Hong [1,2]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Key Lab Machine Percept, Shenzhen, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Recently, unsupervised methods for monocular visual odometry (VO), which require no large quantities of expensive labeled ground truth, have attracted much attention. However, these methods are inadequate for long-term odometry tasks, owing to the inherent limitation of relying on monocular visual data alone and their inability to handle error accumulation. By utilizing supplemental low-cost inertial measurements and exploiting both multi-view geometric constraints and sequential constraints, an unsupervised visual-inertial odometry framework (UnVIO) is proposed in this paper. Our method predicts a per-frame depth map, and extracts and self-adaptively fuses visual-inertial motion features from the image-IMU stream, to achieve long-term odometry. A novel sliding-window optimization strategy, which consists of an intra-window and an inter-window optimization, is introduced to overcome the error accumulation and scale ambiguity problems. The intra-window optimization constrains the geometric inferences within a window by checking photometric consistency, while the inter-window optimization checks the 3D geometric consistency and trajectory consistency among the predictions of separate windows. Extensive experiments conducted on the KITTI and Malaga datasets demonstrate the superiority of UnVIO over other state-of-the-art VO/VIO methods. The code is open-source.
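To make the two optimization stages in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of the kinds of consistency checks described above: a view-synthesis photometric loss for the intra-window stage and a pose-agreement loss between overlapping windows for the inter-window stage. This is not the authors' released implementation; the function names (warp_source_to_target, photometric_loss, trajectory_consistency_loss) and tensor conventions are hypothetical assumptions made for illustration only.

# Illustrative sketch only (hypothetical names, not the UnVIO codebase):
# photometric consistency within a window and pose agreement across windows.
import torch
import torch.nn.functional as F

def pixel_grid(h, w, device):
    # Homogeneous pixel coordinates of shape (3, H*W).
    ys, xs = torch.meshgrid(torch.arange(h, device=device),
                            torch.arange(w, device=device), indexing="ij")
    return torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1).float()

def warp_source_to_target(src_img, tgt_depth, T_tgt_to_src, K):
    # Back-project target pixels with the predicted depth, move them with the
    # predicted relative pose, and sample the source image at the projections.
    b, _, h, w = src_img.shape
    rays = torch.linalg.inv(K) @ pixel_grid(h, w, src_img.device)      # (3, HW)
    pts = rays.unsqueeze(0) * tgt_depth.reshape(b, 1, -1)              # (B, 3, HW)
    pts_h = torch.cat([pts, torch.ones(b, 1, h * w, device=src_img.device)], dim=1)
    src_pts = (T_tgt_to_src @ pts_h)[:, :3]                            # points in source frame
    proj = K.unsqueeze(0) @ src_pts
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    u = 2.0 * uv[:, 0] / (w - 1) - 1.0                                 # normalize to [-1, 1]
    v = 2.0 * uv[:, 1] / (h - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(b, h, w, 2)
    return F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)

def photometric_loss(tgt_img, src_img, tgt_depth, T_tgt_to_src, K):
    # Intra-window check: L1 photometric error between the target frame and the
    # source frame warped into the target view (SSIM term omitted for brevity).
    warped = warp_source_to_target(src_img, tgt_depth, T_tgt_to_src, K)
    return (tgt_img - warped).abs().mean()

def trajectory_consistency_loss(T_overlap_a, T_overlap_b):
    # Inter-window check: penalize disagreement between the relative poses that two
    # overlapping windows predict for their shared frames; both are (B, 4, 4).
    diff = torch.linalg.inv(T_overlap_a) @ T_overlap_b
    rot_err = (diff[:, :3, :3] - torch.eye(3, device=diff.device)).abs().mean()
    trans_err = diff[:, :3, 3].abs().mean()
    return rot_err + trans_err

Here K would be the 3x3 camera intrinsics, tgt_depth a (B, 1, H, W) depth prediction, and T_tgt_to_src a (B, 4, 4) relative pose; in a framework like UnVIO these quantities would come from the depth network and the visual-inertial fusion network, and the losses would be accumulated over the frames of each sliding window.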
Pages: 2347-2354
Number of pages: 8
Related papers
50 in total
  • [31] Monocular visual-inertial odometry leveraging point-line features with structural constraints
    Zhang, Jiahui
    Yang, Jinfu
    Ma, Jiaqi
    VISUAL COMPUTER, 2024, 40 (02): 647-661
  • [33] Monocular Visual-Inertial Depth Estimation
    Wofk, Diana
    Ranftl, Rene
    Muller, Matthias
    Koltun, Vladlen
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023: 6095-6101
  • [34] A novel visual-inertial Monocular SLAM
    Yue, Xiaofeng
    Zhang, Wenjuan
    Xu, Li
    Liu, JiangGuo
    MIPPR 2017: AUTOMATIC TARGET RECOGNITION AND NAVIGATION, 2018, 10608
  • [35] The YTU dataset and recurrent neural network based visual-inertial odometry
    Gurturk, Mert
    Yusefi, Abdullah
    Aslan, Muhammet Fatih
    Soycan, Metin
    Durdu, Akif
    Masiero, Andrea
    MEASUREMENT, 2021, 184
  • [36] A Partial Sparsification Scheme for Visual-Inertial Odometry
    Zhu, Zhikai
    Wang, Wei
    2020 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2020: 1983-1989
  • [37] ADVIO: An Authentic Dataset for Visual-Inertial Odometry
    Cortes, Santiago
    Solin, Arno
    Rahtu, Esa
    Kannala, Juho
    COMPUTER VISION - ECCV 2018, PT X, 2018, 11214: 425-440
  • [38] Direct Visual-Inertial Odometry with Stereo Cameras
    Usenko, Vladyslav
    Engel, Jakob
    Stueckler, Joerg
    Cremers, Daniel
    2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2016: 1885-1892
  • [39] Visual-Inertial Odometry with Point and Line Features
    Yang, Yulin
    Geneva, Patrick
    Eckenhoff, Kevin
    Huang, Guoquan
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019: 2447-2454
  • [40] The First Attempt of SAR Visual-Inertial Odometry
    Liu, Junbin
    Qiu, Xiaolan
    Ding, Chibiao
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021, 59 (01): 287-304