Deep Digging into the Generalization of Self-Supervised Monocular Depth Estimation

Cited by: 0
Authors
Bae, Jinwoo [1 ]
Moon, Sungho [1 ]
Im, Sunghoon [1 ]
Affiliations
[1] DGIST, Dept Elect Engn & Comp Sci, Daegu, South Korea
Funding
National Research Foundation of Singapore;
Keywords
VISION;
DOI
N/A
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised monocular depth estimation has been widely studied recently. Most of the work has focused on improving performance on benchmark datasets such as KITTI, but has offered few experiments on generalization performance. In this paper, we investigate backbone networks (e.g., CNNs, Transformers, and CNN-Transformer hybrid models) with respect to the generalization of monocular depth estimation. We first evaluate state-of-the-art models on diverse public datasets that are never seen during network training. Next, we investigate the effects of texture-biased and shape-biased representations using various texture-shifted datasets that we generated. We observe that Transformers exhibit a strong shape bias, whereas CNNs exhibit a strong texture bias. We also find that shape-biased models show better generalization performance for monocular depth estimation than texture-biased models. Based on these observations, we design a CNN-Transformer hybrid network with a multi-level adaptive feature fusion module, called MonoFormer. The design intuition behind MonoFormer is to increase shape bias by employing Transformers while compensating for the weak locality bias of Transformers by adaptively fusing multi-level representations. Extensive experiments show that the proposed method achieves state-of-the-art performance on various public datasets and the best generalization ability among the competing methods.
Pages: 187-196
Number of pages: 10
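
Illustration: the abstract describes MonoFormer's multi-level adaptive feature fusion module only at a high level. Below is a minimal PyTorch sketch of what such adaptive fusion could look like; the class name AdaptiveFeatureFusion, the softmax gating head, and all shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch only: one plausible form of the "multi-level adaptive
    # feature fusion" described in the abstract. Names and design details are
    # assumptions, not the MonoFormer code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdaptiveFeatureFusion(nn.Module):
        """Fuse multi-level backbone features with input-dependent level weights."""

        def __init__(self, in_channels, out_channels=256):
            super().__init__()
            # 1x1 convs project every level (CNN or Transformer stage) to a shared width.
            self.proj = nn.ModuleList(
                nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
            )
            # Tiny head that scores each level from its pooled response.
            self.gate = nn.Conv2d(out_channels, 1, kernel_size=1)

        def forward(self, feats):
            # Bring all levels to the finest spatial resolution.
            target = feats[0].shape[-2:]
            proj = [
                F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
                for p, f in zip(self.proj, feats)
            ]
            # One adaptive weight per level, normalized across levels.
            scores = torch.stack([self.gate(x).mean(dim=(2, 3)) for x in proj], dim=1)
            weights = scores.softmax(dim=1)  # (B, num_levels, 1)
            fused = torch.zeros_like(proj[0])
            for i, x in enumerate(proj):
                fused = fused + weights[:, i].view(-1, 1, 1, 1) * x
            return fused

    # Example: three hypothetical feature levels from a hybrid backbone.
    if __name__ == "__main__":
        feats = [torch.randn(2, 64, 64, 64),   # fine, local CNN features
                 torch.randn(2, 128, 32, 32),  # mid-level features
                 torch.randn(2, 256, 16, 16)]  # coarse Transformer features
        fusion = AdaptiveFeatureFusion(in_channels=[64, 128, 256])
        print(fusion(feats).shape)  # torch.Size([2, 256, 64, 64])

Per-level softmax gating is only one plausible reading of "adaptive fusion"; per-pixel spatial attention over levels would be equally consistent with the abstract.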