Structural Regularity Aided Visual-Inertial Odometry With Novel Coordinate Alignment and Line Triangulation

Cited by: 13
Authors
Wei, Hao [1 ,2 ]
Tang, Fulin [2 ]
Xu, Zewen [1 ,2 ]
Wu, Yihong [1 ,2 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visual-Inertial SLAM; SLAM; Vanishing Point Estimation; Monocular SLAM System;
DOI
10.1109/LRA.2022.3194329
Chinese Library Classification (CLC)
TP24 [Robotics];
Subject Classification Codes
080202; 1405;
Abstract
Man-made buildings exhibit structural regularity, which provides strong geometric constraints for Visual-Inertial Odometry (VIO) systems. To make full use of this structural information, we propose a new structural regularity aided VIO with novel coordinate alignment and line triangulation under the Manhattan world assumption. The proposed VIO system is built upon OpenVINS [1] and is partly based on our previous work [2]. The proposed coordinate alignment method not only makes the Jacobians and reprojection errors more concise but also reduces the number of coordinate transformations required. In addition, a novel structural line triangulation method is provided, in which the global orientation of a structural line is used to refine its 3D position. Together, these novelties yield a more accurate and faster VIO system. The system is tested on the EuRoC MAV dataset and a self-collected dataset. Experimental results demonstrate that the proposed method achieves better accuracy than state-of-the-art (SOTA) point-based systems (VINS-Mono [3] and OpenVINS [1]), point-line-based systems (PL-VINS [4] and Wei et al. [2]), and a structural-line-based system (StructVIO [5]). Notably, the self-collected dataset is recorded in Manhattan-world scenes and is full of challenging weak-texture and motion-blur situations. On this dataset, the accuracy of our method improves by 40.7% compared with the SOTA point-line-based systems.
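
To make the line triangulation idea concrete, below is a minimal Python sketch (not from the paper) of how a known global direction can constrain a structural line's 3D position. It assumes the line's Manhattan axis has already been identified (e.g., via vanishing points); the plane-based least-squares formulation and all function names are illustrative assumptions, not the authors' actual implementation.

    import numpy as np

    def backprojection_plane_normal(K, line_2d):
        """Normal (camera frame) of the plane through the camera center
        and an image line l = (a, b, c) with a*u + b*v + c = 0; the
        standard result n_c = K^T l for intrinsic matrix K."""
        return K.T @ line_2d

    def triangulate_structural_line(d_w, observations):
        """Triangulate a 3D line whose world direction d_w is known in
        advance (e.g., a Manhattan axis).

        observations: iterable of (R_wc, c_w, n_c), where R_wc rotates
        camera coordinates to world coordinates, c_w is the camera
        center in the world frame, and n_c is the back-projection plane
        normal in the camera frame. Each view gives one linear
        constraint n_w . (p - c_w) = 0 on a point p of the line, so at
        least two views are needed.
        """
        d_w = d_w / np.linalg.norm(d_w)
        # The position along d_w is unobservable, so solve only for the
        # point on the line closest to the origin: p = B @ x, with the
        # columns of B spanning the plane orthogonal to d_w (2 DOF).
        B = np.linalg.svd(d_w.reshape(1, 3))[2][1:].T  # 3x2, B^T d_w = 0
        A, b = [], []
        for R_wc, c_w, n_c in observations:
            n_w = R_wc @ n_c       # plane normal in the world frame
            A.append(n_w @ B)      # constraint: (n_w^T B) x = n_w^T c_w
            b.append(n_w @ c_w)
        x, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
        return B @ x, d_w          # (point on line, unit direction)

Fixing the direction reduces the line's unknowns from four (a general Plucker line) to two, which is one plausible reason such triangulation can be faster and better conditioned, consistent with the accuracy and speed gains reported in the abstract.
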
Pages: 10613-10620
Page count: 8
Related Papers
50 items in total
  • [21] A Partial Sparsification Scheme for Visual-Inertial Odometry
    Zhu, Zhikai
    Wang, Wei
    2020 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2020, : 1983 - 1989
  • [22] Unsupervised Monocular Visual-inertial Odometry Network
    Wei, Peng
    Hua, Guoliang
    Huang, Weibo
    Meng, Fanyang
    Liu, Hong
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2347 - 2354
  • [23] ADVIO: An Authentic Dataset for Visual-Inertial Odometry
    Cortes, Santiago
    Solin, Arno
    Rahtu, Esa
    Kannala, Juho
    COMPUTER VISION - ECCV 2018, PT X, 2018, 11214 : 425 - 440
  • [24] Direct Visual-Inertial Odometry with Stereo Cameras
    Usenko, Vladyslav
    Engel, Jakob
    Stueckler, Joerg
    Cremers, Daniel
    2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2016, : 1885 - 1892
  • [25] RNIN-VIO: Robust Neural Inertial Navigation Aided Visual-Inertial Odometry in Challenging Scenes
    Chen, Danpeng
    Wang, Nan
    Xu, Runsen
    Xie, Weijian
    Bao, Hujun
    Zhang, Guofeng
    2021 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY (ISMAR 2021), 2021, : 275 - 283
  • [26] The First Attempt of SAR Visual-Inertial Odometry
    Liu, Junbin
    Qiu, Xiaolan
    Ding, Chibiao
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021, 59 (01): 287 - 304
  • [27] Monocular Visual-Inertial Odometry for Agricultural Environments
    Song, Kaiyu
    Li, Jingtao
    Qiu, Run
    Yang, Gaidi
    IEEE ACCESS, 2022, 10 : 103975 - 103986
  • [28] ATVIO: Attention Guided Visual-Inertial Odometry
    Liu, Li
    Li, Ge
    Li, Thomas H.
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 4125 - 4129
  • [29] Aerial Visual-Inertial Odometry Performance Evaluation
    Carson, Daniel J.
    Raquet, John F.
    Kauffman, Kyle J.
    PROCEEDINGS OF THE ION 2017 PACIFIC PNT MEETING, 2017, : 137 - 154
  • [30] Pose estimation by Omnidirectional Visual-Inertial Odometry
    Ramezani, Milad
    Khoshelham, Kourosh
    Fraser, Clive
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2018, 105 : 26 - 37