A high precision indoor positioning and attitude determination method based on visual two-dimensional code/inertial information

Cited by: 0
Authors
Niu X. [1]
Wang T. [1]
Ge W. [1]
Kuang J. [1]
Affiliations
[1] GNSS Research Center, Wuhan University, Wuhan
Keywords
ArUco code; indoor high-precision positioning; robustness; visual positioning
DOI
10.13695/j.cnki.12-1222/o3.2023.11.002
Abstract
Existing positioning and attitude determination methods based on visual two-dimensional codes rely on a continuous stream of clear images; when images are lost or blurred, their performance is hard to guarantee. To address this problem, a high-precision indoor positioning and attitude determination method based on visual two-dimensional code/inertial information is proposed, which is low-cost, flexible, and portable. The proposed method takes a strapdown inertial navigation algorithm as its core, obtains high-precision position updates by imaging pre-deployed two-dimensional codes (ArUco markers) with known coordinates, and fuses them through a tightly coupled extended Kalman filter. In addition, the observations are fused with a backward smoothing algorithm, which effectively handles short camera occlusions and image blur and improves the robustness of the system. Experimental results show that the proposed scheme maintains a positioning accuracy of 3.2 cm (RMS) continuously and reliably even when visual observations are interrupted, and its attitude differs from the reference system by less than 0.1° (RMS). © 2023 Editorial Department of Journal of Chinese Inertial Technology. All rights reserved.
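The abstract outlines a pipeline in which ArUco markers with surveyed coordinates supply position observations, an extended Kalman filter fuses them with strapdown inertial navigation, and backward smoothing bridges camera outages. The sketch below is a minimal illustration of that structure, not the paper's implementation: it assumes OpenCV >= 4.7 with the aruco module, and the camera matrix K, distortion vector, marker size, and dictionary are invented placeholder values. It also simplifies the paper's tightly coupled image-residual update into a loosely coupled position update, and omits the backward smoothing pass.

    import cv2
    import numpy as np

    # --- Assumed (illustrative) camera intrinsics, distortion, marker size ---
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    DIST = np.zeros(5)
    MARKER_LEN = 0.10  # ArUco side length in metres (assumed value)

    # 3D marker corners in the marker frame (z = 0), ordered top-left,
    # top-right, bottom-right, bottom-left per cv2.SOLVEPNP_IPPE_SQUARE.
    _h = MARKER_LEN / 2.0
    OBJ_PTS = np.array([[-_h,  _h, 0.0], [ _h,  _h, 0.0],
                        [ _h, -_h, 0.0], [-_h, -_h, 0.0]], dtype=np.float32)

    _DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    _DETECTOR = cv2.aruco.ArucoDetector(_DICT, cv2.aruco.DetectorParameters())

    def marker_poses(gray):
        """Detect ArUco markers in a grayscale image and solve each
        camera-relative pose by PnP. Returns {marker_id: (rvec, tvec)};
        empty when the image is lost or too blurred, in which case the
        INS propagates on its own."""
        corners, ids, _ = _DETECTOR.detectMarkers(gray)
        poses = {}
        if ids is None:
            return poses
        for marker_id, c in zip(ids.flatten(), corners):
            img_pts = c.reshape(4, 2).astype(np.float32)
            ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, img_pts, K, DIST,
                                          flags=cv2.SOLVEPNP_IPPE_SQUARE)
            if ok:
                poses[int(marker_id)] = (rvec, tvec)
        return poses

    def ekf_position_update(x, P, z, R_meas):
        """Simplified EKF measurement update treating the marker-derived
        world-frame position (known marker coordinates + PnP pose) as a
        direct observation of the first three (position) states. The
        paper's tightly coupled formulation instead forms residuals in
        the image plane."""
        n = x.size
        H = np.zeros((3, n))
        H[:, :3] = np.eye(3)             # observe position states only
        S = H @ P @ H.T + R_meas         # innovation covariance
        Kk = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + Kk @ (z - H @ x)         # state correction
        P = (np.eye(n) - Kk @ H) @ P     # covariance update
        return x, P

During a camera outage the filter would simply skip the measurement update and rely on inertial propagation; the backward smoothing described in the abstract would then re-process the stored states once observations resume.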
Pages: 1067-1075
Page count: 8