On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation

Cited by: 0
Authors:
Haimei Zhao
Jing Zhang
Zhuo Chen
Bo Yuan
Dacheng Tao
Affiliations:
[1] University of Sydney, School of Computer Science
[2] Tsinghua University, Shenzhen International Graduate School
[3] University of Queensland, School of Information Technology & Electrical Engineering
Keywords: 3D vision; depth estimation; cross-view consistency; self-supervised learning; monocular perception
DOI: not available
Abstract
Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these consistency measures are highly vulnerable to illumination variance, occlusions, texture-less regions, and moving objects, making them insufficiently robust for diverse scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. First, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, and this field is used to align the temporal depth features via a depth feature alignment (DFA) loss. Second, the 3D point clouds of the reference frame and its nearby frames are computed and transformed into voxel space, where the point density in each voxel is calculated and aligned via a voxel density alignment (VDA) loss. In this way, we exploit the temporal coherence in both the depth feature space and the 3D voxel space for SS-MDE, shifting the “point-to-point” alignment paradigm to a “region-to-region” one. Compared with the photometric consistency loss and the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. An extensive ablation study and analysis validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyHelen/RCVC-depth.
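The two losses summarized above lend themselves to compact sketches. Below is a minimal PyTorch illustration of offset-based temporal feature alignment in the spirit of the DFA loss; the offset-prediction network and the deformable-alignment machinery described in the abstract are abstracted away, and `warp_features`, the tensor shapes, and the plain L1 penalty are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch: align a neighbor frame's depth features to the
# reference frame with a dense per-pixel offset field, then penalize the
# remaining difference. Shapes and the L1 penalty are assumptions.
import torch
import torch.nn.functional as F

def warp_features(feat_src, offset):
    """Warp (B, C, H, W) features by a (B, 2, H, W) offset field (in pixels)."""
    b, _, h, w = feat_src.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat_src.device, dtype=feat_src.dtype),
        torch.arange(w, device=feat_src.device, dtype=feat_src.dtype),
        indexing="ij",
    )
    x = xs + offset[:, 0]  # (B, H, W) target x-coordinates
    y = ys + offset[:, 1]  # (B, H, W) target y-coordinates
    # grid_sample expects sampling coordinates normalized to [-1, 1].
    grid = torch.stack((2.0 * x / (w - 1) - 1.0, 2.0 * y / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(feat_src, grid, align_corners=True)

def dfa_loss(feat_ref, feat_src, offset):
    """L1 distance between reference features and offset-aligned neighbor features."""
    return (feat_ref - warp_features(feat_src, offset)).abs().mean()
```

The VDA loss can be sketched similarly: voxelize two point clouds expressed in a common frame, count points per voxel, and align the normalized densities. The grid bounds, resolution, hard (rather than soft) voxel assignment, and L1 comparison below are likewise assumptions for illustration.

```python
# Hypothetical sketch: voxel density alignment between two point clouds.
import torch

def voxel_density(points, grid_min, grid_max, resolution):
    """Normalized per-voxel point counts for a (B, N, 3) point cloud.

    grid_min, grid_max: (3,) tensors bounding the voxel grid.
    """
    b, n, _ = points.shape
    rel = (points - grid_min) / (grid_max - grid_min)      # map into [0, 1]
    idx = (rel * resolution).long().clamp_(0, resolution - 1)
    # Flatten 3D voxel indices to 1D so counts can be accumulated per batch.
    flat = (idx[..., 0] * resolution + idx[..., 1]) * resolution + idx[..., 2]
    counts = torch.zeros(b, resolution ** 3, device=points.device)
    counts.scatter_add_(1, flat, torch.ones(b, n, device=points.device))
    # Normalizing by the point count tolerates frames with different
    # numbers of valid points, e.g., after occlusion masking.
    return counts / n

def vda_loss(pc_ref, pc_src, grid_min, grid_max, resolution=32):
    """L1 distance between the voxel densities of two aligned point clouds."""
    d_ref = voxel_density(pc_ref, grid_min, grid_max, resolution)
    d_src = voxel_density(pc_src, grid_min, grid_max, resolution)
    return (d_ref - d_src).abs().mean()
```

Comparing densities per voxel rather than matching individual points is what makes this a “region-to-region” criterion: a few occluded or moving points shift a voxel's count only slightly, whereas point-to-point losses penalize each such point in full.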
Pages: 495-513 (18 pages)
Related Papers (50 in total; entries [31]-[40] shown)
  • [31] Tu, Yunbin; Li, Liang; Su, Li; Zha, Zheng-Jun; Yan, Chenggang; Huang, Qingming. Self-supervised Cross-view Representation Reconstruction for Change Captioning. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 2793-2803.
  • [32] Hu, Haifeng; Feng, Yuyang; Li, Dapeng; Zhang, Suofei; Zhao, Haitao. Monocular Depth Estimation via Self-Supervised Self-Distillation. Sensors, 2024, 24(13).
  • [33] Li, Yuan-Zhen; Zheng, Sheng-Jie; Tan, Zi-Xin; Cao, Tuo; Luo, Fei; Xiao, Chun-Xia. Self-Supervised Monocular Depth Estimation by Digging into Uncertainty Quantification. Journal of Computer Science and Technology, 2023, 38(3): 510-525.
  • [34] Chen, Long; Tang, Wen; Wan, Tao Ruan; John, Nigel W. Self-supervised monocular image depth learning and confidence estimation. Neurocomputing, 2020, 381: 272-281.
  • [35] Liu, Xingtong; Sinha, Ayushi; Unberath, Mathias; Ishii, Masaru; Hager, Gregory D.; Taylor, Russell H.; Reiter, Austin. Self-supervised Learning for Dense Depth Estimation in Monocular Endoscopy. OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis (OR 2.0 2018), 2018, 11041: 128-138.
  • [36] Chen, Xingyu; Li, Thomas H.; Zhang, Ruonan; Li, Ge. Frequency-Aware Self-Supervised Monocular Depth Estimation. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023: 5797-5806.
  • [37] Indyk, Ilia; Makarov, Ilya. MonoVAN: Visual Attention for Self-Supervised Monocular Depth Estimation. 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2023: 1211-1220.
  • [38] Wagstaff, Brandon; Kelly, Jonathan. Self-Supervised Scale Recovery for Monocular Depth and Egomotion Estimation. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021: 2620-2627.
  • [39] Zhang, Dongdong; Wang, Chunping; Wang, Huiying; Fu, Qiang. Graph semantic information for self-supervised monocular depth estimation. Pattern Recognition, 2024, 156.
  • [40] Hou, Ruitao; Mo, Kanghua; Long, Yucheng; Li, Ning; Rao, Yuan. Exploring the vulnerability of self-supervised monocular depth estimation models. Information Sciences, 2024, 677.