Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments

Cited by: 0
Authors:
Barnes, Dan [1 ]
Maddern, Will [1 ]
Pascoe, Geoffrey [1 ]
Posner, Ingmar [1 ]
Affiliation:
[1] Univ Oxford, Dept Engn Sci, Oxford Robot Inst, Oxford, England
Funding:
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords:
DOI: not available
CLC classification: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract:
We present a self-supervised approach to ignoring "distractors" in camera images for the purposes of robustly estimating vehicle motion in cluttered urban environments. We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each input image, which we use to train a deep convolutional network. At run-time we use the predicted ephemerality and depth as an input to a monocular visual odometry (VO) pipeline, using either sparse features or dense photometric matching. Our approach yields metric-scale VO using only a single camera and can recover the correct egomotion even when 90% of the image is obscured by dynamic, independently moving objects. We evaluate our robust VO methods on more than 400km of driving from the Oxford RobotCar Dataset and demonstrate reduced odometry drift and significantly improved egomotion estimation in the presence of large moving vehicles in urban traffic.
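As a rough illustration of the idea described in the abstract, the sketch below (not the authors' code) shows one way a predicted per-pixel ephemerality mask could down-weight likely-dynamic pixels in a dense photometric matching cost. It assumes the depth-based warping step has already produced warped_cur_img; all names (weighted_photometric_cost, ephemerality, ...) are hypothetical.

```python
# Minimal illustrative sketch (not the authors' implementation): a predicted
# per-pixel ephemerality mask down-weights likely-dynamic pixels in a dense
# photometric VO residual.
import numpy as np

def weighted_photometric_cost(ref_img, warped_cur_img, ephemerality):
    """Photometric cost in which pixels flagged as ephemeral contribute less.

    ref_img, warped_cur_img : float arrays (H, W), grayscale intensities
    ephemerality            : float array (H, W), predicted probability that a
                              pixel belongs to a dynamic / ephemeral object
    """
    w = 1.0 - ephemerality              # static-scene confidence per pixel
    residual = ref_img - warped_cur_img
    # Normalise by the total weight so heavily occluded frames stay comparable.
    return np.sum(w * residual ** 2) / (np.sum(w) + 1e-6)

# Toy usage: one region of the current frame is "covered by a passing vehicle".
H, W = 64, 64
ref = np.random.rand(H, W)
cur = ref.copy()
cur[:, : W // 2] += 0.5                 # large photometric change in the left half
mask = np.zeros((H, W))
mask[:, : W // 2] = 0.95                # network marks that region as ephemeral
print(weighted_photometric_cost(ref, cur, mask))                 # small: region mostly ignored
print(weighted_photometric_cost(ref, cur, np.zeros_like(mask)))  # larger: no masking applied
```

The same weighting idea carries over to the sparse-feature variant mentioned in the abstract, where matches falling on high-ephemerality pixels would simply be discarded or given low weight in the pose solver.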
Pages: 1894-1900
Page count: 7