CFDepthNet: Monocular Depth Estimation Introducing Coordinate Attention and Texture Features

Cited by: 0
Authors
Wei, Feng [1 ]
Zhu, Jie [1 ]
Wang, Huibin [1 ]
Shen, Jie [1 ]
Affiliations
[1] Hohai University, School of Computer and Information, Nanjing 211100, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Coordinate attention; Texture feature metric loss; Photometric error loss; Monocular depth estimation;
DOI
10.1007/s11063-024-11477-4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Estimating depth in low-texture (or even texture-free) regions with a photometric error loss is challenging, because pixels in such regions produce multiple local minima that make convergence difficult. In this paper, we augment the photometric loss with a texture feature metric loss as an additional constraint and combine it with a coordinate attention mechanism to improve the texture quality and edge detail of the predicted depth maps. The proposed method uses a simple, compact network structure, a dedicated loss function, and a flexible, easily embedded attention module, making it effective and easy to deploy on robotic platforms with limited computing power. Experiments show that our network achieves high-quality, state-of-the-art results on the KITTI dataset, and the same trained model also performs well on the Cityscapes and Make3D datasets.
Pages: 17
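
To make the abstract more concrete, the following is a minimal PyTorch-style sketch of a coordinate attention block of the kind the paper says it embeds into its depth network, following the general design of coordinate attention (Hou et al., CVPR 2021). This is not the authors' code: the class name, channel reduction, and activation choices are assumptions for illustration only.

    # Hypothetical sketch of a coordinate attention block; not CFDepthNet's actual code.
    import torch
    import torch.nn as nn

    class CoordAttention(nn.Module):
        """Coordinate attention: spatial pooling is factorized into two 1-D poolings,
        so the attention weights encode position along height and width separately."""
        def __init__(self, channels, reduction=32):
            super().__init__()
            mid = max(8, channels // reduction)           # assumed bottleneck width
            self.pool_h = nn.AdaptiveAvgPool2d((None, 1)) # pool over width  -> (B, C, H, 1)
            self.pool_w = nn.AdaptiveAvgPool2d((1, None)) # pool over height -> (B, C, 1, W)
            self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
            self.bn1 = nn.BatchNorm2d(mid)
            self.act = nn.Hardswish()
            self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
            self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

        def forward(self, x):
            b, c, h, w = x.shape
            x_h = self.pool_h(x)                          # (B, C, H, 1)
            x_w = self.pool_w(x).permute(0, 1, 3, 2)      # (B, C, W, 1)
            # Joint encoding of both directional context vectors.
            y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
            y_h, y_w = torch.split(y, [h, w], dim=2)
            # Per-axis attention maps reweight the input features by position.
            a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
            a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
            return x * a_h * a_w

In a depth estimation encoder-decoder, such a block is typically inserted after selected encoder or skip-connection feature maps, which is consistent with the abstract's claim that the attention module is flexible and easy to embed; the exact placement used by CFDepthNet is not given in this record.
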
Related Papers
50 records in total
  • [1] Duan, Huimei; Guo, Chenggang; Ou, Yuan. Fusing Events and Frames with Coordinate Attention Gated Recurrent Unit for Monocular Depth Estimation. SENSORS, 2024, 24 (23)
  • [2] Lee, Minhyeok; Hwang, Sangwon; Park, Chaewon; Lee, Sangyoun. EdgeConv with Attention Module for Monocular Depth Estimation. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022: 2364-2373
  • [3] Aich, Shubhra; Vianney, Jean Marie Uwabeza; Islam, Md Amirul; Kaur, Mannat; Liu, Bingbing. Bidirectional Attention Network for Monocular Depth Estimation. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021: 11746-11752
  • [4] Naderi, Taher; Sadovnik, Amir; Hayward, Jason; Qi, Hairong. Monocular Depth Estimation with Adaptive Geometric Attention. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022: 617-627
  • [5] Shim, Kyuhong; Kim, Jiyoung; Lee, Gusang; Shim, Byonghyo. Depth-Relative Self Attention for Monocular Depth Estimation. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023: 1396-1404
  • [6] Ning, Chao; Gan, Hongping. Trap Attention: Monocular Depth Estimation with Manual Traps. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023: 5033-5043
  • [7] Wang, Zhuping; Dai, Xinke; Guo, Zhanyu; Huang, Chao; Zhang, Hao. Unsupervised Monocular Depth Estimation With Channel and Spatial Attention. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (06): 7860-7870
  • [8] Tang, Shuai; Lu, Tongwei; Liu, Xuanxuan; Zhou, Huabing; Zhang, Yanduo. CATNet: Convolutional attention and transformer for monocular depth estimation. PATTERN RECOGNITION, 2024, 145
  • [9] Li, Yundong; Wei, Xiaokun; Fan, Hanlu. Attention Mechanism Used in Monocular Depth Estimation: An Overview. APPLIED SCIENCES-BASEL, 2023, 13 (17)
  • [10] Chiu, Chui-Hong; Astuti, Lia; Lin, Yu-Chen; Hung, Ming-Ku. Dual-Attention Mechanism for Monocular Depth Estimation. 2024 16TH INTERNATIONAL CONFERENCE ON COMPUTER AND AUTOMATION ENGINEERING, ICCAE 2024, 2024: 456-460