Incorporating Learned Depth Perception Into Monocular Visual Odometry to Improve Scale Recovery

Cited: 0
Authors
Mailka, Hamza [1 ]
Abouzahir, Mohamed [1 ]
Ramzi, Mustapha [1 ]
Affiliations
[1] Mohammed V Univ Rabat, High Sch Technol Sale Lab Syst Anal Informat Proc, Rabat, Morocco
Keywords
Visual odometry; scale recovery; depth estimation; DPT model; SLAM; VERSATILE; ROBUST
DOI
10.14569/IJACSA.2023.01408115
CLC Number
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
The growing interest in autonomous driving has led to extensive study of visual odometry (VO), which estimates the pose of a moving platform from images taken by onboard cameras. Over the last decade, supervised deep learning has been proposed for estimating both depth maps and VO. In this paper, we propose a DPT (Dense Prediction Transformer)-based monocular visual odometry method for scale recovery. Scale drift is common both in traditional monocular systems and in recent deep learning approaches, and recovering the scale requires accurate depth estimation. DPT, a framework for dense prediction tasks built on vision transformers rather than convolutional networks, serves as the accurate model used to estimate depth maps. Scale recovery and depth refinement are performed iteratively, which allows our approach to improve the depth estimates while eliminating scale drift. The depth maps estimated by the DPT model are accurate enough to achieve strong performance on a VO benchmark while eliminating the scale-drift issue.
Pages: 1060-1068
Page count: 9
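
The abstract's pipeline, dense learned depth feeding an iterative scale-recovery loop, can be illustrated with a short Python sketch. This is not the authors' implementation: the Intel/dpt-large checkpoint, the Hugging Face transformers API, the median/MAD inlier gating, and the fixed iteration count are all illustrative assumptions, and DPT checkpoints trained on mixed datasets predict relative depth, so a metrically trained or calibrated variant would be needed for absolute scale.

# A minimal sketch of the scale-recovery idea, assuming a DPT depth
# network from Hugging Face transformers and a median-ratio alignment
# step (illustrative choices, not the paper's exact procedure).
import numpy as np
import torch
from PIL import Image
from transformers import DPTForDepthEstimation, DPTImageProcessor

processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large").eval()

def dpt_depth(image: Image.Image) -> np.ndarray:
    """Dense depth predicted by DPT, resized to the input resolution."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).predicted_depth      # (1, H', W')
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1),                          # (1, 1, H', W')
        size=image.size[::-1],                      # PIL size is (W, H)
        mode="bicubic", align_corners=False)
    return pred.squeeze().numpy()

def recover_scale(vo_depths: np.ndarray, pixels: np.ndarray,
                  depth_map: np.ndarray, iters: int = 3) -> float:
    """Iteratively align the VO's sparse, up-to-scale triangulated
    depths with the learned dense depth at the tracked pixels.

    vo_depths: (N,) triangulated feature depths (arbitrary scale)
    pixels:    (N, 2) integer (u, v) feature coordinates
    """
    scale = 1.0
    for _ in range(iters):
        learned = depth_map[pixels[:, 1], pixels[:, 0]]
        ratios = learned / (scale * vo_depths)
        # Robust median/MAD gating: refine the scale on inliers only,
        # so moving objects and bad matches do not bias the estimate.
        med = np.median(ratios)
        mad = max(np.median(np.abs(ratios - med)), 1e-6)
        inliers = np.abs(ratios - med) < 3.0 * mad
        scale *= np.median(ratios[inliers])
    return scale

# The recovered factor rescales the frame-to-frame VO translation t
# (t_metric = scale * t), which is what prevents scale drift from
# accumulating along the trajectory.

Iterating the alignment mirrors the abstract's joint scale-recovery and depth-refinement loop: a better scale estimate sharpens the inlier set, which in turn sharpens the next scale estimate.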
Related Papers
50 records in total
  • [31] Transformer-Based Self-Supervised Monocular Depth and Visual Odometry
    Zhao, Hongru
    Qiao, Xiuquan
    Ma, Yi
    Tafazolli, Rahim
    IEEE SENSORS JOURNAL, 2023, 23 (02) : 1436 - 1446
  • [32] Unsupervised Deep Persistent Monocular Visual Odometry and Depth Estimation in Extreme Environments
    Almalioglu, Yasin
    Santamaria-Navarro, Angel
    Morrell, Benjamin
    Agha-mohammadi, Ali-akbar
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 3534 - 3541
  • [33] Enhancing self-supervised monocular depth estimation with traditional visual odometry
    Andraghetti, Lorenzo
    Myriokefalitakis, Panteleimon
    Dovesi, Pier Luigi
    Luque, Belen
    Poggi, Matteo
    Pieropan, Alessandro
    Mattoccia, Stefano
    2019 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2019), 2019, : 424 - 433
  • [34] A self-supervised monocular odometry with visual-inertial and depth representations
    Zhao, Lingzhe
    Xiang, Tianyu
    Wang, Zhuping
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2024, 361 (06)
  • [35] Monocular Visual Odometry Based on Depth and Optical Flow Using Deep Learning
    Ban, Xicheng
    Wang, Hongjian
    Chen, Tao
    Wang, Ying
    Xiao, Yao
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2021, 70
  • [36] Outdoor Monocular Visual Odometry Enhancement Using Depth Map and Semantic Segmentation
    Kim, Jee-Seong
    Kim, Chul-Hong
    Shin, Yong-Min
    Cho, Il-Soo
    Cho, Dong-Il Dan
    2020 20TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS), 2020, : 1040 - 1045
  • [37] Learned Monocular Depth Priors in Visual-Inertial Initialization
    Zhou, Yunwen
    Kar, Abhishek
    Turner, Eric
    Kowdle, Adarsh
    Guo, Chao X.
    DuToit, Ryan C.
    Tsotsos, Konstantine
    COMPUTER VISION, ECCV 2022, PT XXII, 2022, 13682 : 552 - 570
  • [38] An Unsupervised Monocular Visual Odometry Based on Multi-Scale Modeling
    Zhi, Henghui
    Yin, Chenyang
    Li, Huibin
    Pang, Shanmin
    SENSORS, 2022, 22 (14)
  • [39] Extending Monocular Visual Odometry to Stereo Camera Systems by Scale Optimization
    Mo, Jiawei
    Sattar, Junaed
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 6921 - 6927
  • [40] Monocular Semidirect Visual Odometry for Large-Scale Outdoor Localization
    Qi Naixin
    Yang Xiaogang
    Li Chuanxiang
    Li Xiaofeng
    Zhang Shengxiu
    Cao Lijia
    IEEE ACCESS, 2019, 7 : 57927 - 57942