A Robust and Efficient Visual-Inertial Initialization With Probabilistic Normal Epipolar Constraint

Times cited: 0
|
Authors
Mu, Changshi [1 ]
Feng, Daquan [1 ]
Zheng, Qi [1 ]
Zhuang, Yuan [2 ]
Affiliations
[1] Shenzhen Univ, Coll Elect & Informat Engn, Shenzhen Key Lab Digital Creat Technol, Guangdong Hong Kong Joint Lab Big Data Imaging & Commun, Shenzhen 518060, Peoples R China
[2] Wuhan Univ, State Key Lab Informat Engn Surveying Mapping & Remote Sensing, Wuhan 430072, Peoples R China
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2025, Vol. 10, Issue 4
Funding
National Key Research and Development Program of China;
Keywords
Gyroscopes; Gravity; Cameras; Accuracy; Vectors; Visualization; Indexes; Translation; Simultaneous localization and mapping; Estimation; Visual-inertial SLAM; sensor fusion;
DOI
10.1109/LRA.2025.3544522
CLC number
TP24 [Robotics];
Subject classification codes
080202; 1405
Abstract
Accurate and robust initialization is essential for Visual-Inertial Odometry (VIO), as poor initialization can severely degrade pose accuracy. During initialization, it is crucial to estimate parameters such as the accelerometer bias, gyroscope bias, initial velocity, and gravity. Most existing VIO initialization methods adopt Structure from Motion (SfM) to solve for the gyroscope bias. However, SfM is neither stable nor efficient enough in fast-motion or degenerate scenes. To overcome these limitations, we extend the rotation-translation-decoupled framework by adding new uncertainty parameters and optimization modules. First, we adopt a gyroscope bias estimator that incorporates probabilistic normal epipolar constraints. Second, we fuse IMU and visual measurements to solve efficiently for velocity, gravity, and scale. Finally, we design an additional refinement module that effectively reduces gravity and scale errors. Extensive tests on the EuRoC dataset show that our method reduces gyroscope bias and rotation errors by 16% and 4% on average, and gravity error by 29% on average. On the TUM dataset, our method reduces gravity error and scale error by 14.2% and 5.7% on average, respectively.
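For context, here is a brief sketch of the two core estimation steps the abstract names; the notation is ours, follows the standard formulations in the literature, and may differ in detail from the paper. The normal epipolar constraint (NEC) states that for bearing vectors f_i and f_i' of a feature seen in two frames with relative rotation R and translation direction t, each epipolar plane normal n_i = f_i x (R f_i') must be orthogonal to t. The probabilistic variant (PNEC) weights each squared residual by its propagated keypoint uncertainty, and parameterizing R by the gyroscope bias b_g through IMU preintegration turns the energy into a bias estimator:

    E(b_g, t) = \sum_i \frac{\left( t^\top n_i(b_g) \right)^2}{t^\top \Sigma_i \, t},
    \qquad n_i(b_g) = f_i \times \left( R(b_g) \, f_i' \right),

where \Sigma_i is the covariance of n_i propagated from the image noise. Minimizing E over b_g (and t) recovers the bias without running SfM.

The velocity/gravity/scale step is, in most decoupled initializers, a linear least-squares problem built from the preintegration kinematics. The Python sketch below illustrates that standard construction under simplifying assumptions (camera-IMU extrinsics omitted; the inputs and helper name are hypothetical, not the paper's interface):

    import numpy as np

    def linear_vgs_init(Rs, p_bar, dPs, dVs, dts):
        # Minimal sketch of the common linear solve for per-frame velocities,
        # the gravity vector, and the metric scale. Convention used here:
        #   p_{k+1} = p_k + v_k*dt + 0.5*g*dt^2 + R_k @ dP_k
        #   v_{k+1} = v_k + g*dt + R_k @ dV_k
        # with p_k = s * p_bar[k] (up-to-scale visual positions) and
        # g the world-frame gravity vector (about (0, 0, -9.81)).
        N = len(Rs)
        A = np.zeros((6 * (N - 1), 3 * N + 4))  # unknowns: v_0..v_{N-1}, g, s
        b = np.zeros(6 * (N - 1))
        for k in range(N - 1):
            dt, r = dts[k], 6 * k
            # Position rows: -dt*v_k - 0.5*dt^2*g + s*(p_bar[k+1]-p_bar[k]) = R_k @ dP_k
            A[r:r+3, 3*k:3*k+3] = -dt * np.eye(3)
            A[r:r+3, 3*N:3*N+3] = -0.5 * dt**2 * np.eye(3)
            A[r:r+3, 3*N+3] = p_bar[k+1] - p_bar[k]
            b[r:r+3] = Rs[k] @ dPs[k]
            # Velocity rows: -v_k + v_{k+1} - dt*g = R_k @ dV_k
            A[r+3:r+6, 3*k:3*k+3] = -np.eye(3)
            A[r+3:r+6, 3*(k+1):3*(k+1)+3] = np.eye(3)
            A[r+3:r+6, 3*N:3*N+3] = -dt * np.eye(3)
            b[r+3:r+6] = Rs[k] @ dVs[k]
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x[:3*N].reshape(N, 3), x[3*N:3*N+3], x[3*N+3]  # v, g, s

A refinement of the kind the abstract describes would then typically fix the gravity magnitude at 9.81 m/s^2, re-parameterize g on the 2-sphere, and re-solve, which is the usual way residual gravity and scale errors are reduced; the paper's exact refinement module may differ.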
Pages: 3590-3597
Number of pages: 8
Related papers
50 records in total
  • [11] Learned Monocular Depth Priors in Visual-Inertial Initialization
    Zhou, Yunwen
    Kar, Abhishek
    Turner, Eric
    Kowdle, Adarsh
    Guo, Chao X.
    DuToit, Ryan C.
    Tsotsos, Konstantine
    COMPUTER VISION, ECCV 2022, PT XXII, 2022, 13682 : 552 - 570
  • [13] Advancements in Translation Accuracy for Stereo Visual-Inertial Initialization
    Song, Han
    Qu, Zhongche
    Zhang, Zhi
    Ye, Zihan
    Liu, Cong
    2024 9TH ASIA-PACIFIC CONFERENCE ON INTELLIGENT ROBOT SYSTEMS, ACIRS, 2024, : 210 - 215
  • [14] Integrating Point and Line Features for Visual-Inertial Initialization
    Liu, Hong
    Qiu, Junyin
    Huang, Weibo
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 9470 - 9476
  • [15] A marker-based method for visual-inertial initialization
    An, Kang
    Fan, Hao
    Dong, Junyu
    INTELLIGENT MARINE TECHNOLOGY AND SYSTEMS, 2 (1)
  • [16] Accurate Initialization Method for Monocular Visual-Inertial SLAM
    Amrani, Ahderraouf
    Wang, Hesheng
    2019 3RD INTERNATIONAL SYMPOSIUM ON AUTONOMOUS SYSTEMS (ISAS 2019), 2019, : 159 - 164
  • [17] Renormalization for Initialization of Rolling Shutter Visual-Inertial Odometry
    Micusik, Branislav
    Evangelidis, Georgios
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2021, 129 (06) : 2011 - 2027
  • [18] A Robust and Efficient Visual-Inertial SLAM for Vision-Degraded Environments
    Zhao, Xuhui
    Gao, Zhi
    Wang, Jialiang
    Lin, Zhipeng
    Zhou, Zhiyu
    Huang, Yue
    2024 IEEE 18TH INTERNATIONAL CONFERENCE ON CONTROL & AUTOMATION, ICCA 2024, 2024, : 981 - 987
  • [19] Fast Monocular Visual-Inertial Initialization with an Improved Iterative Strategy
    Cheng, Jun
    Zhang, Liyan
    Chen, Qihong
    JOURNAL OF SENSORS, 2021, 2021
  • [20] Monocular Visual-Inertial SLAM: Continuous Preintegration and Reliable Initialization
    Liu, Yi
    Chen, Zhong
    Zheng, Wenjuan
    Wang, Hao
    Liu, Jianguo
    SENSORS, 2017, 17 (11)