Visual-Inertial Mapping With Non-Linear Factor Recovery

Cited by: 136
Authors
Usenko, Vladyslav [1 ]
Demmel, Nikolaus [1 ]
Schubert, David [1 ]
Stueckler, Joerg [2 ]
Cremers, Daniel [1 ]
Affiliations
[1] Tech Univ Munich, D-80333 Munich, Germany
[2] MPI Intelligent Syst, D-72076 Tubingen, Germany
Keywords
Simultaneous localization and mapping; sensor fusion; Kalman filter
DOI
10.1109/LRA.2019.2961227
CLC Number
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping, and their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward. Estimating motion and geometry from a set of images requires large baselines, so most systems operate on keyframes separated by large time intervals. Inertial data, on the other hand, degrades quickly with the duration of these intervals: after several seconds of integration it typically contains little useful information. In this letter, we propose to extract the relevant information for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that optimally approximate the information about the trajectory accumulated by VIO. To obtain a globally consistent map, we combine these factors with loop-closure constraints using bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable and improve both the robustness and the accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over state-of-the-art approaches.
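The pipeline the abstract describes — summarizing VIO into a few inter-keyframe factors and fusing them with loop-closure constraints in a global optimization — can be illustrated with a toy linear pose graph. This is only an illustrative sketch under simplifying assumptions (2-D positions, linear relative-translation factors, hypothetical weights and measurements), not the paper's non-linear factor recovery or its bundle adjustment:

```python
import numpy as np

def build_system(n, rel_factors, prior_idx=0, prior=np.zeros(2), prior_w=1e6):
    """Stack weighted residuals z - (x_j - x_i) into a linear system A x = b.

    rel_factors: list of (i, j, z, w) meaning pose j minus pose i should
    equal the 2-D measurement z, with scalar weight w (toy stand-ins for
    VIO and loop-closure factors).
    """
    A, b = [], []
    for (i, j, z, w) in rel_factors:
        row = np.zeros((2, 2 * n))
        row[:, 2*j:2*j+2] = np.eye(2)
        row[:, 2*i:2*i+2] = -np.eye(2)
        A.append(np.sqrt(w) * row)
        b.append(np.sqrt(w) * z)
    # A strong prior on one pose anchors the gauge freedom: with only
    # relative factors, absolute position would be unobservable.
    row = np.zeros((2, 2 * n))
    row[:, 2*prior_idx:2*prior_idx+2] = np.eye(2)
    A.append(np.sqrt(prior_w) * row)
    b.append(np.sqrt(prior_w) * prior)
    return np.vstack(A), np.concatenate(b)

# Four keyframes along three sides of a unit square; the odometry
# ("VIO") factors carry a small drift in the second leg.
vio = [(0, 1, np.array([1.0, 0.0]), 1.0),
       (1, 2, np.array([0.0, 1.1]), 1.0),   # drifted: true value is 1.0
       (2, 3, np.array([-1.0, 0.0]), 1.0)]
# A loop-closure factor says keyframe 3 sits one unit above keyframe 0.
loop = [(0, 3, np.array([0.0, 1.0]), 10.0)]

A, b = build_system(4, vio + loop)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
poses = x.reshape(4, 2)
# The loop closure pulls keyframe 3 toward (0, 1), correcting the drift
# that pure odometry integration (y = 1.1) would have accumulated.
print(poses.round(2))
```

In the paper the recovered factors are non-linear and approximate the full VIO information (including gravity direction, which makes roll and pitch observable), and the global problem is a bundle adjustment rather than this linear solve; the sketch only shows how odometry-summary factors and loop closures combine in one least-squares problem.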
Pages: 422–429 (8 pages)
Related Papers (50 total; entries [41]–[50] shown)
  • [41] Distributed Visual-Inertial Cooperative Localization
    Zhu, Pengxiang
    Geneva, Patrick
    Ren, Wei
    Huang, Guoquan
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 8714 - 8721
  • [42] Compass aided visual-inertial odometry
    Wang, Yandong
    Zhang, Tao
    Wang, Yuanchao
    Ma, Jingwei
    Li, Yanhui
    Han, Jingzhuang
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2019, 60 : 101 - 115
  • [43] Monocular Visual-Inertial Depth Estimation
    Wofk, Diana
    Ranftl, Rene
    Muller, Matthias
    Koltun, Vladlen
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 6095 - 6101
  • [44] Visual-Inertial Navigation: A Concise Review
    Huang, Guoquan
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 9572 - 9582
  • [45] Visual-inertial structural acceleration measurement
    Weng, Yufeng
    Lu, Zheng
    Lu, Xilin
    Spencer, Billie F., Jr.
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2022, 37 (09) : 1146 - 1159
  • [46] CHAMELEON: Visual-inertial indoor navigation
    Rydell, Joakim
    Emilsson, Erika
    2012 IEEE/ION POSITION LOCATION AND NAVIGATION SYMPOSIUM (PLANS), 2012, : 541 - 546
  • [47] A novel visual-inertial Monocular SLAM
    Yue, Xiaofeng
    Zhang, Wenjuan
    Xu, Li
    Liu, JiangGuo
    MIPPR 2017: AUTOMATIC TARGET RECOGNITION AND NAVIGATION, 2018, 10608
  • [48] Visual-Inertial Telepresence for Aerial Manipulation
    Lee, Jongseok
    Balachandran, Ribin
    Sarkisov, Yuri S.
    De Stefano, Marco
    Coelho, Andre
    Shinde, Kashmira
    Kim, Min Jun
    Triebel, Rudolph
    Kondak, Konstantin
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 1222 - 1229
  • [49] Information Sparsification in Visual-Inertial Odometry
    Hsiung, Jerry
    Hsiao, Ming
    Westman, Eric
    Valencia, Rafael
    Kaess, Michael
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 1146 - 1153
  • [50] Visual-Inertial Navigation with Guaranteed Convergence
    Di Corato, Francesco
    Innocenti, Mario
    Pollini, Lorenzo
    2013 IEEE WORKSHOP ON ROBOT VISION (WORV), 2013, : 152 - 157