CFDepthNet: Monocular Depth Estimation Introducing Coordinate Attention and Texture Features

Times Cited: 0
Authors
Wei, Feng [1 ]
Zhu, Jie [1 ]
Wang, Huibin [1 ]
Shen, Jie [1 ]
Affiliations
[1] Hohai Univ, Sch Comp & Informat, Nanjing 211100, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Coordinate attention; Texture feature metric loss; Photometric error loss; Monocular depth estimation;
DOI
10.1007/s11063-024-11477-4
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Handling depth estimation in low-texture (or even texture-free) regions with a photometric error loss is challenging, because pixels in such regions admit multiple local minima and the optimization struggles to converge. In this paper, in addition to the photometric loss, we introduce a texture feature metric loss as a constraint and combine it with a coordinate attention mechanism to improve the texture quality and edge detail of the depth map. The proposed network uses a simple yet compact structure, a dedicated loss function, and a flexible, embeddable attention module, making it effective and easy to deploy on robotic platforms with limited computing power. Experiments show that our network not only achieves high-quality, state-of-the-art results on the KITTI dataset, but the same trained model also performs well on the Cityscapes and Make3D datasets.
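The record does not include code. As a rough illustration of the two ingredients named in the abstract, the sketch below (PyTorch assumed) shows a standard coordinate attention block in the style of Hou et al. (CVPR 2021) and a combined objective that adds a texture-feature metric term to the photometric error. The function names and loss weights (texture_feature_metric_loss, w_feat, w_smooth) are hypothetical; the paper's actual architecture and coefficients are not reproduced here.

import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    # Coordinate attention sketch: global pooling is factorized into
    # height-wise and width-wise pooling so the attention maps retain
    # positional information along each spatial axis.
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                              # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)          # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        attn_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        attn_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * attn_h * attn_w


def texture_feature_metric_loss(feat_target, feat_source_warped):
    # Compare learned texture features of the target view against source-view
    # features warped with the predicted depth and pose; in low-texture regions
    # this gives a better-conditioned objective than raw photometric error.
    return (feat_target - feat_source_warped).abs().mean()


def total_loss(photometric, feature_metric, smoothness,
               w_feat=1e-2, w_smooth=1e-3):
    # Hypothetical weighting of the photometric, texture-feature-metric, and
    # smoothness terms.
    return photometric + w_feat * feature_metric + w_smooth * smoothness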
Pages: 17
Related Papers
50 records in total
  • [21] Monocular depth estimation with multi-view attention autoencoder
    Jung, Geunho
    Yoon, Sang Min
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (23) : 33759 - 33770
  • [22] Boosting Monocular Depth Estimation with Channel Attention and Mutual Learning
    Takagi, Kazunari
    Ito, Seiya
    Kaneko, Naoshi
    Sumi, Kazuhiko
    2019 JOINT 8TH INTERNATIONAL CONFERENCE ON INFORMATICS, ELECTRONICS & VISION (ICIEV) AND 2019 3RD INTERNATIONAL CONFERENCE ON IMAGING, VISION & PATTERN RECOGNITION (ICIVPR) WITH INTERNATIONAL CONFERENCE ON ACTIVITY AND BEHAVIOR COMPUTING (ABC), 2019, : 228 - 233
  • [23] MAMo: Leveraging Memory and Attention for Monocular Video Depth Estimation
    Yasarla, Rajeev
    Cai, Hong
    Jeong, Jisoo
    Shi, Yunxiao
    Garrepalli, Risheek
    Porikli, Fatih
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 8720 - 8730
  • [24] DAttNet: monocular depth estimation network based on attention mechanisms
    Astudillo, Armando
    Barrera, Alejandro
    Guindel, Carlos
    Al-Kaff, Abdulla
    García, Fernando
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36 : 3347 - 3356
  • [25] SAU-Net: Monocular Depth Estimation Combining Multi-Scale Features and Attention Mechanisms
    Zhao, Wei
    Song, Yunqing
    Wang, Tingting
    IEEE ACCESS, 2023, 11 : 137734 - 137746
  • [26] CNNapsule: A Lightweight Network with Fusion Features for Monocular Depth Estimation
    Wang, Yinchu
    Zhu, Haijiang
    Liu, Mengze
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT I, 2021, 12891 : 507 - 518
  • [27] Deep Monocular Depth Estimation Based on Content and Contextual Features
    Abdulwahab, Saddam
    Rashwan, Hatem A.
    Sharaf, Najwa
    Khalid, Saif
    Puig, Domenec
    SENSORS, 2023, 23 (06)
  • [28] Multi-level Feature Maps Attention for Monocular Depth Estimation
    Lee, Seunghoon
    Lee, Minhyeok
    Lee, Sangyoon
    2021 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS-ASIA (ICCE-ASIA), 2021,
  • [29] Self-Supervised Monocular Depth Estimation Based on Channel Attention
    Tao, Bo
    Chen, Xinbo
    Tong, Xiliang
    Jiang, Du
    Chen, Baojia
    PHOTONICS, 2022, 9 (06)
  • [30] MonoVAN: Visual Attention for Self-Supervised Monocular Depth Estimation
    Indyk, Ilia
    Makarov, Ilya
    2023 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY, ISMAR, 2023, : 1211 - 1220