Cubic B-Spline-Based Feature Tracking for Visual-Inertial Odometry With Event Camera

Cited by: 0
Authors
Liu, Xinghua [1]
Xue, Hanjun [1]
Gao, Xiang [1]
Liu, Han [2]
Chen, Badong [3]
Ge, Shuzhi Sam [4]
Affiliations
[1] Xian Univ Technol, Sch Elect Engn, Xian 710048, Peoples R China
[2] Xian Univ Technol, Sch Automat & Informat Engn, Xian 710048, Peoples R China
[3] Xi An Jiao Tong Univ, Sch Elect & Informat Engn, Xian 710049, Peoples R China
[4] Natl Univ Singapore, Sch Elect & Comp Engn, Singapore 117583, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Cubic B-spline; dynamic and active-pixel vision sensor (DAVIS) camera; inertial measurement unit (IMU) data; trajectory estimation; visual-inertial odometry (VIO); OBSERVABILITY ANALYSIS; ROBUST; IMU; VERSATILE; SLAM;
DOI
10.1109/TIM.2023.3325508
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
It is challenging to obtain accurate trajectories with standard-camera visual odometry (VO) in environments with weak textures and varying illumination. This article introduces a novel approach, cubic B-spline-based visual-inertial odometry (CB-VIO), using the dynamic and active-pixel vision sensor (DAVIS) camera. In the proposed CB-VIO method, a matching mechanism between images and events is designed to improve the success rate of event tracking; the template points extracted from events are then used to construct a cubic B-spline-based event tracking model on SE(3) within a continuous spatiotemporal window. Because the tracking model can interpolate poses at any time instant, an inertial measurement unit (IMU) measurement model is constructed to fuse data from asynchronous and synchronous sensors operating at different rates. Compared with spline-based visual-inertial odometry (Spline-VIO) and event-based VO (EVO), the proposed continuous spatiotemporal window addresses the data-association problem of EVO and removes the fixed-time-interval constraint on the continuous-time trajectory of Spline-VIO. Experimental results on public datasets covering multiple scenes demonstrate the superior accuracy and robustness of CB-VIO (translation error <= 1.3% and rotation error <= 2°).
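The record does not reproduce the paper's equations. As an illustrative sketch only, assuming the widely used cumulative cubic B-spline parameterization on SE(3) from the continuous-time VIO literature (the paper's exact model may differ), a pose T(u) within a spline segment is interpolated from four control poses T_{i-1}, ..., T_{i+2} as

% Sketch under the stated assumption: standard cumulative cubic B-spline on SE(3),
% not necessarily the formulation used in CB-VIO.
\[
\mathbf{T}(u) \;=\; \mathbf{T}_{i-1}\prod_{j=1}^{3}\exp\!\bigl(\tilde{B}_{j}(u)\,\boldsymbol{\Omega}_{i+j-1}\bigr),
\qquad
\boldsymbol{\Omega}_{k} \;=\; \log\!\bigl(\mathbf{T}_{k-1}^{-1}\,\mathbf{T}_{k}\bigr),
\]
\[
\tilde{B}(u) \;=\; \frac{1}{6}
\begin{bmatrix} 6 & 0 & 0 & 0\\ 5 & 3 & -3 & 1\\ 1 & 3 & 3 & -2\\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1\\ u\\ u^{2}\\ u^{3} \end{bmatrix},
\qquad u \in [0,1),
\]

where u is the normalized time within the segment and the \tilde{B}_{j}(u) are cumulative basis functions. Because T(u) can be evaluated at any u, IMU measurements and asynchronous events arriving at different rates can each be associated with a pose at their own timestamps, which is the fusion step the abstract describes.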
Pages: 15