Illumination Insensitive Monocular Depth Estimation Based on Scene Object Attention and Depth Map Fusion

Cited by: 0
Authors
Wen, Jing [1 ,2 ]
Ma, Haojiang [1 ,2 ]
Yang, Jie [1 ,2 ]
Zhang, Songsong [1 ,2 ]
Affiliations
[1] Shanxi Univ, Taiyuan, Peoples R China
[2] Minist Educ, Key Lab Comp Intelligence & Chinese Proc, Taiyuan, Peoples R China
Keywords
Monocular depth estimation; Scene object attention; Weighted depth map fusion; Image enhancement; Illumination insensitivity;
DOI
10.1007/978-981-99-8549-4_30
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Monocular depth estimation (MDE) is a crucial yet challenging computer vision (CV) task that suffers from sensitivity to lighting, blurring of neighboring depth edges, and omission of scene objects. To address these problems, we propose an illumination-insensitive monocular depth estimation method based on scene object attention and depth map fusion. First, we design a low-light image selection algorithm, combined with the EnlightenGAN model, to improve the image quality of the training dataset and reduce the influence of lighting on depth estimation. Second, we develop a scene object attention mechanism (SOAM) to address the issue of incomplete depth information in natural scenes. Third, we design a weighted depth map fusion (WDMF) module to fuse depth maps with different visual granularities and depth information, effectively resolving the problem of blurred depth map edges. Extensive experiments on the KITTI dataset demonstrate that our method effectively reduces the sensitivity of the depth estimation model to light and yields depth maps with more complete scene object contours.
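The weighted depth map fusion (WDMF) step described in the abstract can be illustrated with a minimal sketch. The code below assumes the module blends multi-scale depth predictions into a single map using per-pixel softmax weights produced by a small convolution; the class name WeightedDepthFusion, the confidence head, and the bilinear upsampling are illustrative assumptions and not the authors' actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedDepthFusion(nn.Module):
    """Sketch of a weighted fusion of multi-scale depth maps (assumed design)."""
    def __init__(self, num_scales: int):
        super().__init__()
        # One 3x3 conv predicts a per-pixel confidence map for each input scale.
        self.confidence = nn.Conv2d(num_scales, num_scales, kernel_size=3, padding=1)

    def forward(self, depth_maps):
        # depth_maps: list of (B, 1, H_i, W_i) predictions at different scales.
        target_size = depth_maps[0].shape[-2:]
        # Upsample every scale to the finest resolution before fusing.
        aligned = [F.interpolate(d, size=target_size, mode="bilinear",
                                 align_corners=False) for d in depth_maps]
        stacked = torch.cat(aligned, dim=1)               # (B, S, H, W)
        weights = torch.softmax(self.confidence(stacked), dim=1)
        fused = (weights * stacked).sum(dim=1, keepdim=True)
        return fused                                       # (B, 1, H, W)

if __name__ == "__main__":
    scales = [torch.rand(1, 1, 96 // s, 320 // s) for s in (1, 2, 4)]
    fused = WeightedDepthFusion(num_scales=3)(scales)
    print(fused.shape)  # torch.Size([1, 1, 96, 320])

Normalizing the weights with a softmax keeps the fused output a convex combination of the per-scale estimates, which is one simple way to combine coarse scene-level structure with fine edge detail.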
Pages: 358-370 (13 pages)