The Constraints between Edge Depth and Uncertainty for Monocular Depth Estimation

Cited: 1
|
Authors
Wu, Shouying [1 ]
Li, Wei [2 ]
Liang, Binbin [2 ]
Huang, Guoxin [1 ]
Affiliations
[1] Sichuan Univ, Natl Key Lab Fundamental Sci Synthet Vis, Chengdu 610065, Peoples R China
[2] Sichuan Univ, Sch Aeronut & Astronaut, Chengdu 610065, Peoples R China
Keywords
monocular depth estimation; self-supervised method; uncertainty estimation; VISION;
DOI
10.3390/electronics10243153
CLC number
TP [Automation technology, computer technology];
Subject classification number
0812 ;
Abstract
The self-supervised monocular depth estimation paradigm has become an important branch of computer-vision depth-estimation tasks. However, the depth-estimation errors caused by depth pulling at object edges and by occlusion remain unsolved. The grayscale discontinuity at object edges leads to relatively high depth uncertainty for pixels in these regions. We improve geometric edge predictions by taking uncertainty into account in the depth-estimation task. To this end, we explore how uncertainty affects this task and propose a new self-supervised monocular depth-estimation technique based on multi-scale uncertainty. In addition, we introduce a teacher-student architecture into our models and investigate the impact of different teacher networks on the depth and uncertainty results. We evaluate the performance of our paradigm in detail on the standard KITTI dataset. Compared with the Monodepth2 baseline, the accuracy of our method increased from 87.7% to 88.2%, the AbsRel error decreased from 0.115 to 0.11, the SqRel error decreased from 0.903 to 0.822, and the RMSE decreased from 4.863 to 4.686. Our approach mitigates texture copying and inaccurate object boundaries, producing sharper and smoother depth maps.
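The abstract reports the standard KITTI depth-evaluation metrics: accuracy (the ratio of pixels with δ < 1.25), AbsRel, SqRel, and RMSE. A minimal sketch of how these metrics are conventionally computed (the helper name `depth_metrics` is illustrative, not from the paper):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Compute standard monocular-depth evaluation metrics.

    gt, pred: arrays of positive ground-truth / predicted depths (metres),
    already masked to valid pixels.
    """
    gt = np.asarray(gt, dtype=float)
    pred = np.asarray(pred, dtype=float)
    # delta accuracy: fraction of pixels whose ratio error is below 1.25
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "AbsRel": float(np.mean(np.abs(gt - pred) / gt)),   # mean absolute relative error
        "SqRel": float(np.mean((gt - pred) ** 2 / gt)),     # mean squared relative error
        "RMSE": float(np.sqrt(np.mean((gt - pred) ** 2))),  # root-mean-square error
        "a1": float(np.mean(thresh < 1.25)),                # accuracy, delta < 1.25
    }
```

On KITTI these metrics are typically computed per image over valid LiDAR ground-truth pixels and then averaged over the test split.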
Pages: 15
Related papers
50 records in total
  • [21] Perceptual Monocular Depth Estimation
    Pan, Janice
    Bovik, Alan C.
    NEURAL PROCESSING LETTERS, 2021, 53 (02) : 1205 - 1228
  • [23] Monocular Depth Estimation Using Relative Depth Maps
    Lee, Jae-Han
    Kim, Chang-Su
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 9721 - 9730
  • [24] Monocular Depth Estimation With Augmented Ordinal Depth Relationships
    Cao, Yuanzhouhan
    Zhao, Tianqi
    Xian, Ke
    Shen, Chunhua
    Cao, Zhiguo
    Xu, Shugong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (08) : 2674 - 2682
  • [25] ON MONOCULAR DEPTH ESTIMATION AND UNCERTAINTY QUANTIFICATION USING CLASSIFICATION APPROACHES FOR REGRESSION
    Yu, Xuanlong
    Franchi, Gianni
    Aldea, Emanuel
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1481 - 1485
  • [26] Self-Supervised Monocular Depth Estimation by Digging into Uncertainty Quantification
    Li, Yuan-Zhen
    Zheng, Sheng-Jie
    Tan, Zi-Xin
    Cao, Tuo
    Luo, Fei
    Xiao, Chun-Xia
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2023, 38 (03) : 510 - 525
  • [27] Capturing Uncertainty in Monocular Depth Estimation: Towards Fuzzy Voxel Maps
    Buck, Andrew R.
    Anderson, Derek T.
    Camaioni, Raub
    Akers, Jack
    Luke, Robert H., III
    Keller, James M.
    2023 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, FUZZ, 2023,
  • [28] Self-Supervised Monocular Depth Estimation with Multi-constraints
    Yang, Xinpeng
    Zhang, Sen
    Zhao, Baoyong
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 8422 - 8427
  • [30] Monocular Depth Estimation via Deep Structured Models with Ordinal Constraints
    Ron, Daniel
    Duan, Kun
    Ma, Chongyang
    Xu, Ning
    Wang, Shenlong
    Hanumante, Sumant
    Sagar, Dhritiman
    2018 INTERNATIONAL CONFERENCE ON 3D VISION (3DV), 2018, : 570 - 577