DeepVIO: Self-supervised Deep Learning of Monocular Visual Inertial Odometry using 3D Geometric Constraints

Cited by: 0
Authors
Han, Liming [1 ]
Lin, Yimin [1 ]
Du, Guoguang [1 ]
Lian, Shiguo [1 ]
Affiliations
[1] CloudMinds Technol Inc, AI Dept, Beijing 100102, Peoples R China
Keywords
ROBUST; END;
DOI
10.1109/iros40897.2019.8968467
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a self-supervised deep learning network for monocular visual inertial odometry, named DeepVIO. DeepVIO provides absolute trajectory estimation by directly merging 2D optical flow features (OFF) and Inertial Measurement Unit (IMU) data. Specifically, it first estimates the depth and dense 3D point cloud of each scene from stereo sequences, and then derives 3D geometric constraints, including 3D optical flow and 6-DoF pose, as supervisory signals. Notably, this 3D optical flow remains robust and accurate in the presence of dynamic objects and textureless environments. During DeepVIO training, the 2D optical flow network is constrained by the projection of its corresponding 3D optical flow, while the LSTM-style IMU preintegration network and the fusion network are learned by minimizing loss functions derived from ego-motion constraints. Furthermore, an IMU status update scheme improves IMU pose estimation by updating the additional gyroscope and accelerometer biases. Experimental results on the KITTI and EuRoC datasets show that DeepVIO outperforms state-of-the-art learning-based methods in terms of accuracy and data adaptability. Compared with traditional methods, DeepVIO reduces the impact of inaccurate camera-IMU calibration, unsynchronized data, and missing data.
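The abstract's core supervisory signal, projecting the 3D optical flow back into the image plane to constrain the 2D optical flow network, can be sketched with a standard pinhole camera model: lift each pixel to a 3D point using its estimated depth, add the 3D flow, and reproject. This is an illustrative reconstruction under the usual pinhole assumptions, not the authors' implementation; the function names and NumPy formulation are assumptions.

```python
import numpy as np

def project(points, K):
    """Pinhole projection of (N, 3) camera-frame points to (N, 2) pixels."""
    uv = (K @ points.T).T                 # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]         # perspective divide

def flow_2d_from_3d(pixels, depth, flow_3d, K):
    """Supervisory 2D flow: lift pixels to 3D, add 3D flow, reproject.

    pixels : (N, 2) pixel coordinates in frame t
    depth  : (N,)   per-pixel depth in frame t
    flow_3d: (N, 3) 3D optical (scene) flow in camera coordinates
    K      : (3, 3) camera intrinsics
    """
    ones = np.ones((pixels.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([pixels, ones]).T).T
    pts = rays * depth[:, None]           # back-projected 3D points
    return project(pts + flow_3d, K) - pixels
```

In training, the 2D optical flow predicted by the network would be penalized (e.g. with an L1 or L2 loss) against the flow produced by this projection, so that the geometrically consistent 3D flow supervises the 2D flow estimate.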
Pages: 6906-6913 (8 pages)
Related Papers (50 records)
  • [41] Self-supervised 3D vehicle detection based on monocular images. Liu, He; Sun, Yi. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2024, 127.
  • [42] 3D Object Aided Self-Supervised Monocular Depth Estimation. Wei, Songlin; Chen, Guodong; Chi, Wenzheng; Wang, Zhenhua; Sun, Lining. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022: 10635-10642.
  • [43] Self-supervised learning for fine-grained monocular 3D face reconstruction in the wild. Huang, Dongjin; Shi, Yongsheng; Liu, Jinhua; Tang, Wen. MULTIMEDIA SYSTEMS, 2024, 30 (04).
  • [44] Using Unsupervised Deep Learning Technique for Monocular Visual Odometry. Liu, Qiang; Li, Ruihao; Hu, Huosheng; Gu, Dongbing. IEEE ACCESS, 2019, 7: 18076-18088.
  • [45] Hybrid self-supervised monocular visual odometry system based on spatio-temporal features. Yuan, Shuangjie; Zhang, Jun; Lin, Yujia; Yang, Lu. ELECTRONIC RESEARCH ARCHIVE, 2024, 32 (05): 3543-3568.
  • [46] GraphAVO: Self-Supervised Visual Odometry Based on Graph-Assisted Geometric Consistency. Song, Rujun; Liu, Jiaqi; Xiao, Zhuoling; Yan, Bo. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (12): 20673-20682.
  • [48] Self-supervised Depth Estimation in Laparoscopic Image Using 3D Geometric Consistency. Huang, Baoru; Zheng, Jian-Qing; Nguyen, Anh; Xu, Chi; Gkouzionis, Ioannis; Vyas, Kunal; Tuch, David; Giannarou, Stamatia; Elson, Daniel S. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT VII, 2022, 13437: 13-22.
  • [49] Learning monocular visual odometry with dense 3D mapping from dense 3D flow. Zhao, Cheng; Sun, Li; Purkait, Pulak; Duckett, Tom; Stolkin, Rustam. 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018: 6864-6871.
  • [50] CanonPose: Self-Supervised Monocular 3D Human Pose Estimation in the Wild. Wandt, Bastian; Rudolph, Marco; Zell, Petrissa; Rhodin, Helge; Rosenhahn, Bodo. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 13289-13299.