Unsupervised learning of depth and ego-motion with absolutely global scale recovery from visual and inertial data sequences

Cited by: 0
Authors
Meng Y. [1 ]
Sun Q. [1 ]
Zhang C. [1 ]
Tang Y. [1 ]
Affiliations
[1] Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, East China University of Science and Technology, Shanghai
Funding
National Natural Science Foundation of China
Keywords
BiLSTM; depth; ego-motion; monocular; scale recovery
DOI
10.1080/23335777.2020.1811386
Abstract
In this paper, we propose an unsupervised learning method for jointly estimating monocular depth and ego-motion that is capable of recovering the absolute scale of the global camera trajectory. To address the general problems of scale drift and scale ambiguity of a monocular camera, we fuse geometric motion data from an inertial measurement unit (IMU) and use a Bi-directional Long Short-Term Memory (BiLSTM) network to extract temporal features. In addition, we add a lightweight and efficient attention mechanism, the Convolutional Block Attention Module (CBAM), to the Convolutional Neural Networks (CNNs) that extract image features. In scenes with severe illumination changes, ambiguous structures, moving objects and occlusions, and especially scenes with progressively varying textures, the geometric features provide adaptive estimates when the visual features degenerate. Experiments on the KITTI driving dataset show that our scheme achieves promising results for camera pose and depth estimation, and that the absolute scale recovery of the global camera trajectory is effective. © 2020 Informa UK Limited, trading as Taylor & Francis Group.
Pages: 133-158
Number of pages: 25
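As a rough illustration of the architecture outlined in the abstract above, the following PyTorch sketch shows the two feature paths it mentions: a BiLSTM over IMU sequences for temporal features, and a CBAM-style channel/spatial attention block applied to CNN image feature maps. This is not the authors' implementation; all module names, layer sizes, and the toy usage at the end are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the two feature paths described in
# the abstract: a BiLSTM over IMU readings and CBAM attention on CNN features.
# All dimensions and names are assumptions for illustration only.
import torch
import torch.nn as nn

class ImuBiLSTM(nn.Module):
    """Extracts a temporal feature from a window of 6-DoF IMU readings."""
    def __init__(self, imu_dim=6, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(imu_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, imu_seq):          # imu_seq: (B, T, 6) accel + gyro
        out, _ = self.lstm(imu_seq)      # (B, T, 2*hidden)
        return out[:, -1]                # last time step as the sequence feature

class CBAM(nn.Module):
    """Channel + spatial attention on a CNN feature map (Woo et al., 2018 style)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                         # avg-pooled branch
        mx = self.mlp(x.amax(dim=(2, 3)))                          # max-pooled branch
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)           # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))                  # spatial attention

# Toy usage: the two feature streams a pose regressor could later fuse.
imu_feat = ImuBiLSTM()(torch.randn(2, 10, 6))        # (2, 256)
img_feat = CBAM(64)(torch.randn(2, 64, 32, 32))      # (2, 64, 32, 32)
```

In such a design, the IMU branch supplies metrically scaled motion cues that the image branch lacks, which is how the paper motivates absolute-scale recovery when visual features degenerate.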