Integrating Point and Line Features for Visual-Inertial Initialization

Cited by: 4
|
Authors
Liu, Hong [1 ]
Qiu, Junyin [1 ]
Huang, Weibo [1 ]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Key Lab Machine Percept, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
STRUCTURE-FROM-MOTION; ONLINE INITIALIZATION; CALIBRATION;
DOI
10.1109/ICRA46639.2022.9811641
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Accurate and robust initialization is crucial for visual-inertial systems, as it significantly affects localization accuracy. Most existing feature-based initialization methods rely on point features to estimate the initial parameters. However, the performance of these methods often degrades in real scenes, since point features are unstable and may be observed only discontinuously, especially in low-textured environments. By contrast, line features, which provide richer geometric information than points, are common in man-made environments. Therefore, in this paper, we propose a novel visual-inertial initialization method that integrates both point and line features. Specifically, a closed-form formulation for line features is presented and combined with the point-based method to build an integrated linear system, from which parameters including the initial velocity, gravity, point depths, and line-endpoint depths can be jointly solved. Furthermore, to refine these parameters, a global optimization method is proposed, consisting of two novel nonlinear least-squares problems for points and lines respectively. Both the gravity magnitude and the gyroscope bias are considered in the refinement. Extensive experiments on both simulated and public datasets show that integrating point and line features in the initialization stage achieves higher accuracy and better robustness than pure point-based methods.
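The abstract describes stacking point and line constraints into one linear system whose solution jointly yields the initial velocity, gravity, and feature depths. The sketch below illustrates that idea only in the abstract: the coefficient rows are random stand-ins, not the paper's actual preintegration/projection terms, and the feature counts are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's exact formulation): an integrated
# initializer stacks constraints from point and line observations into one
# linear system A x = b, where x collects the initial velocity (3), gravity
# (3), point depths, and line-endpoint depths.
rng = np.random.default_rng(0)

n_points, n_line_endpoints = 4, 6            # hypothetical feature counts
n_unknowns = 3 + 3 + n_points + n_line_endpoints

# Ground-truth state, used here only to synthesize consistent measurements.
x_true = np.concatenate([
    rng.normal(size=3),                      # initial velocity v0
    np.array([0.0, 0.0, -9.81]),             # gravity g
    rng.uniform(1.0, 5.0, n_points),         # point depths
    rng.uniform(1.0, 5.0, n_line_endpoints)  # line-endpoint depths
])

# Each observation contributes linear constraint rows; random placeholders
# stand in for the real visual-inertial coefficients.
A = rng.normal(size=(3 * n_unknowns, n_unknowns))
b = A @ x_true

# All parameters are recovered jointly in closed form by linear least squares.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_hat, x_true, atol=1e-8)
```

The point of the joint formulation is that point and line constraints share the velocity and gravity unknowns, so every feature observation helps condition the common motion parameters.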
Pages: 9470 - 9476
Page count: 7
Related Papers
50 records
  • [21] Leveraging Planar Regularities for Point Line Visual-Inertial Odometry
    Li, Xin
    He, Yijia
    Lin, Jinlong
    Liu, Xiao
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 5120 - 5127
  • [22] Semi-Direct Monocular Visual-Inertial Odometry Using Point and Line Features for IoV
    Jiang, Nan
    Huang, Debin
    Chen, Jing
    Wen, Jie
    Zhang, Heng
    Chen, Honglong
    ACM TRANSACTIONS ON INTERNET TECHNOLOGY, 2022, 22 (01)
  • [23] ImPL-VIO: An Improved Monocular Visual-Inertial Odometry Using Point and Line Features
    Cheng, Haoqi
    Wang, Hong
    Gan, Zhongxue
    Deng, Jinxiang
    INTELLIGENT ROBOTICS AND APPLICATIONS, 2020, 12595 : 217 - 229
  • [24] Stereo visual-inertial localization algorithm for orchard robots based on point-line features
    Xu, Xing
    Liang, Jinming
    Li, Jianying
    Wu, Guang
    Duan, Jieli
    Jin, Mohui
    Fu, Han
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2024, 224
  • [25] An Improved Initialization Method for Monocular Visual-Inertial SLAM
    Cheng, Jun
    Zhang, Liyan
    Chen, Qihong
    ELECTRONICS, 2021, 10 (24)
  • [26] Review of visual-inertial navigation system initialization method
    Liu Z.
    Shi D.
    Yang S.
    Li R.
    Guofang Keji Daxue Xuebao/Journal of National University of Defense Technology, 2023, 45 (02): : 15 - 26
  • [27] Learned Monocular Depth Priors in Visual-Inertial Initialization
    Zhou, Yunwen
    Kar, Abhishek
    Turner, Eric
    Kowdle, Adarsh
    Guo, Chao X.
    DuToit, Ryan C.
    Tsotsos, Konstantine
    COMPUTER VISION, ECCV 2022, PT XXII, 2022, 13682 : 552 - 570
  • [28] Renormalization for Initialization of Rolling Shutter Visual-Inertial Odometry
    Micusik, Branislav
    Evangelidis, Georgios
    International Journal of Computer Vision, 2021, 129 : 2011 - 2027
  • [29] Advancements in Translation Accuracy for Stereo Visual-Inertial Initialization
    Song, Han
    Qu, Zhongche
    Zhang, Zhi
    Ye, Zihan
    Liu, Cong
    2024 9TH ASIA-PACIFIC CONFERENCE ON INTELLIGENT ROBOT SYSTEMS, ACIRS, 2024, : 210 - 215
  • [30] A marker-based method for visual-inertial initialization
    An, Kang
    Fan, Hao
    Dong, Junyu
    Intelligent Marine Technology and Systems, 2 (1):