Unsupervised Monocular Visual-inertial Odometry Network

Cited by: 0
Authors
Wei, Peng [1 ,2 ]
Hua, Guoliang [1 ]
Huang, Weibo [1 ]
Meng, Fanyang [2 ]
Liu, Hong [1 ,2 ]
Affiliations
[1] Peking University, Shenzhen Graduate School, Key Laboratory of Machine Perception, Shenzhen, People's Republic of China
[2] Peng Cheng Laboratory, Shenzhen, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recently, unsupervised methods for monocular visual odometry (VO), which require no large quantities of expensive ground-truth labels, have attracted much attention. However, these methods are inadequate for long-term odometry tasks, owing to the inherent limitation of relying on monocular visual data alone and their inability to handle error accumulation. By utilizing supplemental low-cost inertial measurements and exploiting multi-view geometric and sequential constraints, an unsupervised visual-inertial odometry framework (UnVIO) is proposed in this paper. Our method predicts the per-frame depth map and extracts and self-adaptively fuses visual-inertial motion features from the image-IMU stream to achieve long-term odometry. A novel sliding-window optimization strategy, consisting of an intra-window and an inter-window optimization, is introduced to overcome the error accumulation and scale ambiguity problems. The intra-window optimization constrains the geometric inferences within a window by checking photometric consistency, while the inter-window optimization checks 3D geometric consistency and trajectory consistency among the predictions of separate windows. Extensive experiments on the KITTI and Malaga datasets demonstrate the superiority of UnVIO over other state-of-the-art VO/VIO methods. The code is open-source(1).
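As a rough illustration of the intra-window photometric-consistency term described in the abstract, the sketch below is not the authors' released code; the use of PyTorch, the tensor shapes, and the helper names are assumptions. It warps a source frame into the target view using a predicted depth map and relative pose, then measures the L1 reconstruction error that would drive unsupervised training.

# Minimal sketch (assumed PyTorch) of photometric-consistency supervision for
# unsupervised VO/VIO: warp a source frame into the target view with predicted
# depth and relative pose, then penalise the reconstruction error.
import torch
import torch.nn.functional as F

def inverse_warp(img_src, depth_tgt, T_tgt2src, K):
    """Warp img_src into the target view.
    img_src:   (B, 3, H, W) source image
    depth_tgt: (B, 1, H, W) predicted depth of the target frame
    T_tgt2src: (B, 4, 4)    predicted pose mapping target coords to source coords
    K:         (B, 3, 3)    camera intrinsics
    """
    B, _, H, W = img_src.shape
    # Pixel grid in homogeneous coordinates: (B, 3, H*W)
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).float().view(1, 3, -1).expand(B, -1, -1)
    # Back-project to 3D in the target camera, then transform into the source camera.
    cam_tgt = torch.inverse(K) @ pix * depth_tgt.view(B, 1, -1)
    cam_tgt_h = torch.cat([cam_tgt, torch.ones(B, 1, H * W)], dim=1)
    cam_src = (T_tgt2src @ cam_tgt_h)[:, :3]
    # Project into the source image and normalise to [-1, 1] for grid_sample.
    proj = K @ cam_src
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    return F.grid_sample(img_src, grid, align_corners=True)

def photometric_loss(img_tgt, img_src, depth_tgt, T_tgt2src, K):
    # L1 difference between the target frame and the warped source frame.
    warped = inverse_warp(img_src, depth_tgt, T_tgt2src, K)
    return (img_tgt - warped).abs().mean()

In practice a term like this is typically combined with a depth-smoothness prior and, in UnVIO's case, with the inter-window 3D geometric and trajectory consistency checks described in the abstract.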
Pages: 2347-2354
Number of pages: 8