Incorporating Learned Depth Perception Into Monocular Visual Odometry to Improve Scale Recovery

Cited by: 0
Authors
Mailka, Hamza [1 ]
Abouzahir, Mohamed [1 ]
Ramzi, Mustapha [1 ]
Affiliations
[1] Mohammed V Univ Rabat, High Sch Technol Sale Lab Syst Anal Informat Proc, Rabat, Morocco
Keywords
Visual odometry; scale recovery; depth estimation; DPT model; SLAM
DOI
10.14569/IJACSA.2023.01408115
Chinese Library Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
The growing interest in autonomous driving has led to a comprehensive study of visual odometry (VO), which estimates the pose of a moving platform from images captured by onboard cameras. Over the last decade, supervised deep learning has been proposed for estimating both depth maps and VO. In this paper, we propose a DPT (Dense Prediction Transformer)-based monocular visual odometry method for scale recovery. Scale drift is a common problem in both traditional monocular systems and recent deep learning approaches, and recovering the scale requires accurate depth estimation. DPT, a framework for dense prediction tasks that builds on vision transformers instead of convolutional networks, serves as an accurate model for estimating depth maps. Scale recovery and depth refinement are performed iteratively, which allows our approach to improve the depth estimates while eliminating scale drift. The depth maps estimated by the DPT model are accurate enough to achieve the best possible efficiency on a VO benchmark, eliminating the scale-drift issue.
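
To make the scale-recovery idea concrete, the following is a minimal sketch in Python, not the authors' implementation. It pairs an off-the-shelf DPT depth model from the Hugging Face transformers library with a simple median-ratio alignment between the learned dense depths and the up-to-scale depths triangulated by a monocular VO front end. The checkpoint name Intel/dpt-large and the median-ratio rule are assumptions for illustration; that checkpoint predicts relative rather than metric depth, so a metric-calibrated variant would be needed in practice, and the paper's iterative scale recovery and depth refinement loop is more involved than this one-shot alignment.

import numpy as np
import torch
from PIL import Image
from transformers import DPTForDepthEstimation, DPTImageProcessor

# Off-the-shelf DPT depth estimator (checkpoint name is an assumption).
processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large").eval()

def predict_depth(image: Image.Image) -> np.ndarray:
    # Run DPT and resize the prediction back to the input resolution.
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).predicted_depth        # shape (1, h, w)
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1),                            # (1, 1, h, w)
        size=image.size[::-1],                        # PIL size is (W, H)
        mode="bicubic", align_corners=False,
    )
    return pred.squeeze().numpy()                     # (H, W) depth map

def recover_scale(pix: np.ndarray, vo_depths: np.ndarray,
                  depth_map: np.ndarray) -> float:
    # pix:       (N, 2) integer pixel coordinates of triangulated features
    # vo_depths: (N,)   up-to-scale depths from the VO front end
    # depth_map: (H, W) learned dense depth for the same frame
    sampled = depth_map[pix[:, 1], pix[:, 0]]         # depth at feature pixels
    valid = (vo_depths > 1e-6) & (sampled > 1e-6)     # drop degenerate points
    ratios = sampled[valid] / vo_depths[valid]
    return float(np.median(ratios))                   # robust to outlier matches

In a full pipeline the recovered scale would multiply the up-to-scale translation of each estimated pose, and the rescaled sparse depths could in turn be used to reject inconsistent network predictions, which is the spirit of the iterative refinement described in the abstract.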
Pages: 1060-1068
Page count: 9
Related Papers (50 total)
  • [1] Improving Monocular Visual Odometry Using Learned Depth
    Sun, Libo
    Yin, Wei
    Xie, Enze
    Li, Zhengrong
    Sun, Changming
    Shen, Chunhua
    IEEE TRANSACTIONS ON ROBOTICS, 2022, 38 (05): 3173-3186
  • [2] Unsupervised Scale Network for Monocular Relative Depth and Visual Odometry
    Wang, Zhongyi
    Chen, Qijun
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [3] Scale Recovery for Monocular Visual Odometry Using Depth Estimated with Deep Convolutional Neural Fields
    Yin, Xiaochuan
    Wang, Xiangwei
    Du, Xiaoguo
    Chen, Qijun
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017: 5871-5879
  • [4] Monocular Visual Odometry Scale Recovery using Geometrical Constraint
    Wang, Xiangwei
    Zhang, Hui
    Yin, Xiaochuan
    Du, Mingxiao
    Chen, Qijun
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018: 988-995
  • [5] Monocular depth recovery method assisting visual odometry initialization in corridor environment
    Xu, Xiaosu
    Liu, Yehao
    Yao, Yiqing
    Xia, Ruoyan
    Wang, Zijian
    Fan, Mingze
    Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology, 32 (08): 753-761
  • [6] Depth Prediction for Monocular Direct Visual Odometry
    Cheng, Ran
    Agia, Christopher
    Meger, David
    Dudek, Gregory
    2020 17TH CONFERENCE ON COMPUTER AND ROBOT VISION (CRV 2020), 2020: 70-77
  • [7] Monocular Visual Odometry using Learned Repeatability and Description
    Huang, Huaiyang
    Ye, Haoyang
    Sun, Yuxiang
    Liu, Ming
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020: 8913-8919
  • [8] Accurate and Robust Scale Recovery for Monocular Visual Odometry Based on Plane Geometry
    Tian, Rui
    Zhang, Yunzhou
    Zhu, Delong
    Liang, Shiwen
    Coleman, Sonya
    Kerr, Dermot
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021: 5296-5302
  • [9] Resolving Scale Ambiguity for Monocular Visual Odometry
    Choi, Sunglok
    Park, Jaehyun
    Yu, Wonpil
    2013 10TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS AND AMBIENT INTELLIGENCE (URAI), 2013: 604-608
  • [10] Multimodal Scale Estimation for Monocular Visual Odometry
    Fanani, Nolang
    Stuerck, Alina
    Barnada, Marc
    Mester, Rudolf
    2017 28TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV 2017), 2017: 1714-1721