CFDepthNet: Monocular Depth Estimation Introducing Coordinate Attention and Texture Features

Times Cited: 0
Authors
Wei, Feng [1]
Zhu, Jie [1]
Wang, Huibin [1]
Shen, Jie [1]
Affiliations
[1] Hohai Univ, Sch Comp & Informat, Nanjing 211100, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Coordinate attention; Texture feature metric loss; Photometric error loss; Monocular depth estimation;
DOI
10.1007/s11063-024-11477-4
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Estimating depth in low-texture (or even texture-free) regions with a photometric error loss is challenging: for pixels in such regions the loss has multiple local minima, which makes convergence difficult. In this paper, we supplement the photometric loss with a texture feature metric loss as an additional constraint and combine it with a coordinate attention mechanism to improve the texture quality and edge detail of the depth map. The network uses a simple yet compact structure, a distinctive loss function, and a flexible embeddable attention module, making it effective and easy to deploy on robotic platforms with limited computing power. Experiments show that our network achieves state-of-the-art results on the KITTI dataset, and the same trained model also performs well on the Cityscapes and Make3D datasets.
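The abstract does not give the paper's exact loss formulation (SSIM weighting, per-pixel minimum over source views, or the choice of feature extractor), so the following is only a minimal sketch of the general idea: measure the same reprojection error both on raw pixels and on learned texture features, since the feature-space loss surface is smoother in low-texture regions. The function names (`photometric_l1`, `feature_metric_l1`, `total_loss`) and the weight `lam` are assumptions for illustration, not the paper's definitions.

```python
import torch


def photometric_l1(warped: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Mean L1 photometric error between the view synthesized from the
    # predicted depth and pose, and the actual target frame.
    return (warped - target).abs().mean()


def feature_metric_l1(warped_feat: torch.Tensor,
                      target_feat: torch.Tensor) -> torch.Tensor:
    # The same reprojection error, measured on texture feature maps.
    # In low-texture regions raw intensities are nearly constant, so the
    # photometric loss is flat; learned features give a usable gradient.
    return (warped_feat - target_feat).abs().mean()


def total_loss(warped, target, warped_feat, target_feat,
               lam: float = 0.1) -> torch.Tensor:
    # lam is an assumed weighting between the two terms; the paper's
    # actual value and any extra regularizers (e.g. smoothness) may differ.
    return photometric_l1(warped, target) + lam * feature_metric_l1(
        warped_feat, target_feat)
```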
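"Coordinate attention" is the name of a known module (Hou et al., CVPR 2021) that factorizes attention into two 1-D poolings along height and width, preserving positional information that plain channel attention discards; this record does not specify the paper's exact variant or where it is embedded, so the sketch below is a generic version of that module (with ReLU in place of the original h-swish) rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Generic coordinate attention block: directional average pooling,
    a shared 1x1 transform, then per-direction sigmoid attention maps."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Pool along width -> (n, c, h, 1); along height -> (n, c, w, 1).
        x_h = x.mean(dim=3, keepdim=True)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)
        # Shared transform over the concatenated directional descriptors.
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)  # back to (n, mid, 1, w)
        # Direction-aware attention maps, applied multiplicatively.
        a_h = torch.sigmoid(self.conv_h(y_h))  # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w))  # (n, c, 1, w)
        return x * a_h * a_w


# Usage: the module is shape-preserving, so it can be dropped between
# encoder or decoder stages of a depth network.
x = torch.randn(2, 64, 24, 80)
att = CoordinateAttention(64)
print(att(x).shape)  # torch.Size([2, 64, 24, 80])
```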
Pages: 17
Related Papers
50 records in total
  • [41] Illumination Insensitive Monocular Depth Estimation Based on Scene Object Attention and Depth Map Fusion
    Wen, Jing
    Ma, Haojiang
    Yang, Jie
    Zhang, Songsong
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X, 2024, 14434 : 358 - 370
  • [42] Attention Mono-Depth: Attention-Enhanced Transformer for Monocular Depth Estimation of Volatile Kiln Burden Surface
    Liu, Cong
    Zhang, Chaobo
    Liang, Xiaojun
    Han, Zhiming
    Li, Yiming
    Yang, Chunhua
    Gui, Weihua
    Gao, Wen
    Wang, Xiaohao
    Li, Xinghui
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (02) : 1686 - 1699
  • [43] Multi-scale Residual Pyramid Attention Network for Monocular Depth Estimation
    Liu, Jing
    Zhang, Xiaona
    Li, Zhaoxin
    Mao, Tianlu
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 5137 - 5144
  • [44] GlobalDepth: Global-Aware Attention Model for Unsupervised Monocular Depth Estimation
    Yu, Huimin
    Li, Ruoqi
    Xiao, Zhuoling
    Yan, Bo
    2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023
  • [45] Deep neural networks with attention mechanism for monocular depth estimation on embedded devices
    Liu, Siping
    Tu, Xiaohan
    Xu, Cheng
    Li, Renfa
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 131 : 137 - 150
  • [46] Unsupervised Monocular Depth Estimation Using Attention and Multi-Warp Reconstruction
    Ling, Chuanwu
    Zhang, Xiaogang
    Chen, Hua
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 2938 - 2949
  • [47] Monocular depth estimation with boundary attention mechanism and Shifted Window Adaptive Bins
    Hu, Hengjia
    Liang, Mengnan
    Wang, Congcong
    Zhao, Meng
    Shi, Fan
    Zhang, Chao
    Han, Yilin
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 249
  • [48] Attention Attention Everywhere: Monocular Depth Prediction with Skip Attention
    Agarwal, Ashutosh
    Arora, Chetan
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 5850 - 5859
  • [49] The Monocular Depth Estimation Challenge
    Spencer, Jaime
    Qian, C. Stella
    Russell, Chris
    Hadfield, Simon
    Graf, Erich
    Adams, Wendy
    Schofield, Andrew J.
    Elder, James
    Bowden, Richard
    Cong, Heng
    Mattoccia, Stefano
    Poggi, Matteo
    Suri, Zeeshan Khan
    Tang, Yang
    Tosi, Fabio
    Wang, Hao
    Zhang, Youmin
    Zhang, Yusheng
    Zhao, Chaoqiang
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW), 2023, : 623 - 632
  • [50] Perceptual Monocular Depth Estimation
    Pan, Janice
    Bovik, Alan C.
    NEURAL PROCESSING LETTERS, 2021, 53 (02) : 1205 - 1228