Using full-scale feature fusion for self-supervised indoor depth estimation

Cited by: 0
Authors
Cheng, Deqiang [1 ]
Chen, Junhui [1 ]
Lv, Chen [1 ]
Han, Chenggong [1 ]
Jiang, He [1 ]
Affiliations
[1] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Monocular depth estimation; Feature fusion; Self-supervised; Indoor scenes; ResNeSt;
DOI
10.1007/s11042-023-16581-6
Chinese Library Classification
TP [Automation and computer technology];
Discipline code
0812;
Abstract
Monocular depth estimation is a crucial task in computer vision, and self-supervised algorithms are gaining popularity because they do not require expensive ground-truth supervision. However, current self-supervised algorithms can produce inaccurate estimates and distorted boundaries when applied to indoor scenes. Combining multi-scale features is an established direction in image segmentation for improving accuracy and resolving boundary distortion, yet few indoor self-supervised algorithms have explored it. To address this, we propose a novel full-scale feature fusion approach consisting of a full-scale skip connection and a full-scale feature fusion block. During the network's encoding and decoding, this approach aggregates high-level and low-level information from feature maps at every scale, compensating for the cross-layer feature information the network would otherwise lose. The proposed full-scale feature fusion improves accuracy while reducing the number of decoder parameters. To fully exploit the full-scale feature fusion module, we also replace the ResNet encoder backbone with the more advanced ResNeSt. Combining the two methods yields a significant improvement in prediction accuracy. We extensively evaluate our approach on the indoor benchmark datasets NYU Depth V2 and ScanNet. The experimental results demonstrate that our method outperforms existing algorithms, particularly on NYU Depth V2, where accuracy rises to 83.8%.
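The core idea of full-scale fusion described in the abstract, aggregating encoder feature maps from every scale at one decoder resolution, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, nearest-neighbour resampling choice, and stage shapes are all assumptions for demonstration.

```python
import numpy as np

def resize_nearest(fmap, out_h, out_w):
    """Nearest-neighbour resize of a (C, H, W) feature map (assumed
    resampling scheme; real decoders often use bilinear upsampling)."""
    c, h, w = fmap.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return fmap[:, rows[:, None], cols[None, :]]

def fuse_full_scale(features, out_h, out_w):
    """Bring every scale's feature map to the target decoder
    resolution and concatenate them along the channel axis, so the
    decoder sees high- and low-level information from all scales."""
    resized = [resize_nearest(f, out_h, out_w) for f in features]
    return np.concatenate(resized, axis=0)

# Hypothetical feature maps from four encoder stages (C, H, W).
feats = [np.random.rand(64, 56, 56),
         np.random.rand(128, 28, 28),
         np.random.rand(256, 14, 14),
         np.random.rand(512, 7, 7)]

fused = fuse_full_scale(feats, 28, 28)
print(fused.shape)  # (960, 28, 28): 64 + 128 + 256 + 512 channels
```

In a real network the concatenated tensor would then pass through a convolutional fusion block to mix the channels; here the sketch stops at the aggregation step the abstract describes.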
Pages: 28215-28233 (19 pages)
Related papers (50 total)
  • [41] Self-Supervised Monocular Scene Decomposition and Depth Estimation
    Safadoust, Sadra
    Guney, Fatma
    2021 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2021), 2021, : 627 - 636
  • [42] Self-supervised recurrent depth estimation with attention mechanisms
    Makarov, Ilya
    Bakhanova, Maria
    Nikolenko, Sergey
    Gerasimova, Olga
    PEERJ COMPUTER SCIENCE, 2022, 8
  • [43] Learn to Adapt for Self-Supervised Monocular Depth Estimation
    Sun, Qiyu
    Yen, Gary G.
    Tang, Yang
    Zhao, Chaoqiang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 15647 - 15659
  • [44] Self-Supervised Monocular Depth Estimation With Multiscale Perception
    Zhang, Yourun
    Gong, Maoguo
    Li, Jianzhao
    Zhang, Mingyang
    Jiang, Fenlong
    Zhao, Hongyu
    IEEE Transactions on Image Processing, 2022, 31 : 3251 - 3266
  • [47] Self-Supervised Monocular Depth Estimation With Extensive Pretraining
    Choi, Hyukdoo
    IEEE ACCESS, 2021, 9 : 157236 - 157246
  • [49] Self-supervised monocular depth estimation for gastrointestinal endoscopy
    Liu, Yuying
    Zuo, Siyang
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2023, 238
  • [50] Self-supervised monocular depth estimation with direct methods
    Wang, Haixia
    Sun, Yehao
    Wu, Q. M. Jonathan
    Lu, Xiao
    Wang, Xiuling
    Zhang, Zhiguo
    NEUROCOMPUTING, 2021, 421 : 340 - 348