VI-SLAM algorithm with camera-IMU extrinsic automatic calibration and online estimation

Cited by: 0
Authors
Pan L. [1 ]
Tian F. [1 ]
Ying W. [1 ]
Liang W. [1 ]
She B. [1 ]
Affiliations
[1] Naval University of Engineering, Wuhan
Keywords
Extrinsic calibration; Initialization; Sensor fusion; Simultaneous localization and mapping; State estimation
DOI
10.19650/j.cnki.cjsi.J1904954
Abstract
Visual-inertial simultaneous localization and mapping (VI-SLAM) is based mainly on the fusion of visual and inertial navigation information. Calibrating the camera-IMU extrinsic parameters offline is tedious, and tracking accuracy suffers when the mechanical configuration of the sensor suite changes slightly due to impact or equipment adjustment. To solve this problem, a VI-SLAM algorithm with automatic calibration and online estimation of the camera-IMU extrinsic parameters is proposed. First, the camera-IMU extrinsic rotation and the gyroscope bias are estimated by hand-eye calibration. Second, the scale factor, gravity, and camera-IMU extrinsic translation are estimated without considering the accelerometer bias. Third, these parameters are refined using the gravitational magnitude and the accelerometer bias. Finally, the camera-IMU extrinsic parameters are added to the state vector for online estimation. Experimental results on the EuRoC datasets show that the algorithm can automatically calibrate and estimate the camera-IMU extrinsic parameters, with extrinsic orientation and translation errors within 0.5 degree and 0.02 meter, respectively. This helps improve both the rapid deployment and the accuracy of the VI-SLAM system. © 2019, Science Press. All rights reserved.
Pages: 56-67 (11 pages)
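The abstract's first step, recovering the camera-IMU extrinsic rotation by hand-eye calibration, can be posed as a homogeneous linear system over relative rotation pairs. The sketch below is a minimal illustration of that formulation, not the authors' implementation: the input names q_imu_rel and q_cam_rel are hypothetical stand-ins for relative IMU rotations from gyroscope pre-integration and relative camera rotations from epipolar geometry, and Hamilton quaternions in (w, x, y, z) order are assumed.

import numpy as np

# Left-multiplication matrix L(q): L(q) @ p equals the Hamilton product q * p,
# with quaternions stored as (w, x, y, z).
def quat_left(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

# Right-multiplication matrix R(q): R(q) @ p equals the Hamilton product p * q.
def quat_right(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def calibrate_extrinsic_rotation(q_imu_rel, q_cam_rel):
    """Estimate the camera-IMU extrinsic rotation q_bc by hand-eye calibration.

    Each frame pair k supplies a relative IMU rotation and a relative
    camera rotation linked by q_imu_k * q_bc = q_bc * q_cam_k, i.e.
    (L(q_imu_k) - R(q_cam_k)) q_bc = 0. Stacking all pairs gives an
    over-determined homogeneous system solved by SVD.
    """
    A = np.vstack([quat_left(qi) - quat_right(qc)
                   for qi, qc in zip(q_imu_rel, q_cam_rel)])
    _, _, vt = np.linalg.svd(A)
    q_bc = vt[-1]                        # right singular vector of the smallest singular value
    return q_bc / np.linalg.norm(q_bc)   # unit quaternion (w, x, y, z)

The right singular vector paired with the smallest singular value is the least-squares solution of the stacked constraints; the abstract's second and third steps likewise reduce to linear least-squares problems, there in the scale factor, gravity vector, and extrinsic translation.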