Unmanned aerial vehicle visual localization method based on deep feature orthorectification matching

Cited by: 0
Authors
Shang K. [1 ]
Zhao L. [1 ]
Zhang W. [2 ]
Ming L. [1 ]
Liu C. [1 ]
Affiliations
[1] Beijing Institute of Automation and Control Equipment, Beijing
[2] School of Automation, Beijing Institute of Technology, Beijing
Keywords
deep learning; matching navigation; satellite denial; unmanned aerial vehicle; visual localization
DOI
10.13695/j.cnki.12-1222/o3.2024.01.007
Abstract
Under satellite-denial conditions, acquiring high-precision positioning information is the foundation for unmanned aerial vehicles (UAVs) to complete their tasks safely and reliably. Traditional image matching methods struggle to guarantee safety, offer poor positioning accuracy, and impose numerous matching constraints. A visual positioning method based on deep feature orthorectification matching is therefore proposed: a deep learning network extracts deep features from orthorectified UAV aerial images and commercial maps, matching relationships are established, and high-precision UAV position information is then calculated. The impact of different factors on visual positioning accuracy is analyzed according to the visual measurement model, and offline experiments are conducted on a dataset of medium-altitude aerial images. The experimental results demonstrate that, compared with traditional template matching methods based on histogram of oriented gradients (HOG) features, the proposed method improves positioning accuracy by 25%, and the positioning root mean square error (RMSE) is better than 15 m + 0.5%H (for heights below 5000 m), which shows certain engineering application value. © 2024 Editorial Department of Journal of Chinese Inertial Technology. All rights reserved.
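The pipeline described in the abstract (deep-feature matching between an orthorectified aerial image and a georeferenced map, followed by position calculation) can be sketched as below. This is a minimal illustration, not the paper's implementation: it assumes the deep-feature correspondences have already been produced upstream (e.g. by a SuperPoint/SuperGlue-style network), reduces the match geometry to a robust 2D translation, and uses a GDAL-style affine geotransform for the map. The function names `rmse_bound` and `locate_uav` are hypothetical.

```python
import numpy as np

def rmse_bound(height_m):
    """Positioning RMSE bound quoted in the abstract: 15 m + 0.5% * H,
    stated for flight heights below 5000 m."""
    if not 0.0 <= height_m < 5000.0:
        raise ValueError("bound is stated only for heights below 5000 m")
    return 15.0 + 0.005 * height_m

def locate_uav(uav_px, map_px, uav_center_px, geotransform):
    """Estimate the UAV ground position from matched keypoints (illustrative sketch).

    uav_px, map_px : (N, 2) pixel coordinates of matched deep features in the
                     orthorectified aerial image and the reference map
    uav_center_px  : (2,) pixel of the aerial-image centre (nadir point after
                     orthorectification)
    geotransform   : GDAL-style affine [x0, dx, rx, y0, ry, dy] of the map
    """
    # Robust 2D translation between the two images; the median down-weights
    # outlier matches that survived the matching stage.
    offset = np.median(np.asarray(map_px, float) - np.asarray(uav_px, float), axis=0)
    cx, cy = np.asarray(uav_center_px, float) + offset
    # Map pixel -> geographic coordinates via the map's affine geotransform.
    x = geotransform[0] + cx * geotransform[1] + cy * geotransform[2]
    y = geotransform[3] + cx * geotransform[4] + cy * geotransform[5]
    return x, y
```

At 1000 m height the stated bound gives 15 + 0.005 * 1000 = 20 m RMSE; the real method would additionally estimate rotation and scale rather than a pure translation.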
Pages: 52-57, 106
References (11 items)
  • [1] Li L, Wang Y, Gui X, et al., Visual-inertial positioning method based on priori feature map matching constraints, Journal of Chinese Inertial Technology, 30, pp. 44-50, (2022)
  • [2] Wu C., GNSS-denied UAV Visual Navigation Research, (2021)
  • [3] Shang K, Zheng X, Wang L, et al., Pose ambiguity correction algorithm for UAV mobile platform landing, Journal of Chinese Inertial Technology, 28, pp. 462-468, (2020)
  • [4] Zhang X, Zheng L, Tan Z, et al., Visual localization method based on feature coding and dynamic routing optimization, Journal of Chinese Inertial Technology, 30, pp. 451-460, (2022)
  • [5] Dai M, Chen J, Lu Y, et al., Finding point with image: an end-to-end benchmark for vision-based UAV localization, (2022)
  • [6] Couturier A, Akhloufi M A., A review on absolute visual localization for UAV, Robotics and Autonomous Systems, 135, (2021)
  • [7] Patel B, Barfoot T D, Schoellig A P., Visual localization with Google Earth images for robust global pose estimation of UAVs, 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 6491-6497, (2020)
  • [8] Shan M, Wang F, Lin F, et al., Google map aided visual navigation for UAVs in GPS-denied environment, 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 114-119, (2015)
  • [9] DeTone D, Malisiewicz T, Rabinovich A., SuperPoint: Self-supervised interest point detection and description, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 337-33712, (2018)
  • [10] Sarlin P E, DeTone D, Malisiewicz T, et al., SuperGlue: Learning feature matching with graph neural networks, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4937-4946, (2020)