Self-Supervised Monocular Depth Estimation With Isometric-Self-Sample-Based Learning

Cited by: 2
Authors
Cha, Geonho [1 ]
Jang, Ho-Deok [1 ]
Wee, Dongyoon [1 ]
Affiliations
[1] NAVER Corp, Clova AI, Seongnam 13561, South Korea
Keywords
Training; Estimation; Vehicle dynamics; Optical flow; Cameras; Three-dimensional displays; Point cloud compression; Autonomous vehicle navigation; deep learning methods; RGB-D perception; vision-based navigation
DOI
10.1109/LRA.2022.3221871
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
Managing dynamic regions in the photometric loss formulation has been a central issue in self-supervised depth estimation. Most previous methods alleviate this issue by removing dynamic regions from the photometric loss based on masks estimated by another module, which makes it difficult to fully utilize the training images. In this letter, to handle this problem, we propose an isometric self-sample-based learning (ISSL) method that fully utilizes the training images in a simple yet effective way. The proposed method provides additional supervision during training using self-generated images that comply with the pure static-scene assumption. Specifically, the isometric self-sample generator synthesizes self-samples for each training image by applying random rigid transformations on the estimated depth, so the generated self-samples and the corresponding training image always follow the static-scene relation. Our method can serve as a plug-and-play module for two existing models without any architectural modification. It provides additional supervision during the training phase only, so there is no additional overhead on the base model's parameters or computation during the inference phase. These properties fit well with models oriented to real-time applications. We show that plugging our ISSL module into two existing models consistently improves performance by a large margin. It also boosts depth accuracy across different scene types, i.e., outdoor scenes (KITTI and Make3D) and an indoor scene (NYUv2), validating its effectiveness.
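To make the generator described in the abstract concrete, the NumPy sketch below illustrates one way an isometric self-sample could be synthesized: the predicted depth is backprojected to a 3D point cloud, a small random rigid (isometric) transform is applied, and the colors are reprojected into the virtual view, whose relative pose is then known exactly. This is only an illustrative sketch, not the authors' implementation; the function names, parameter ranges, and the nearest-pixel forward splatting are assumptions.

# Minimal sketch of isometric self-sample generation (illustrative only;
# function names and parameter values are assumptions, not the authors' code).
import numpy as np

def random_rigid_transform(max_angle_deg=5.0, max_trans=0.1):
    """Sample a small random SE(3) transform: rotation about a random axis plus translation."""
    axis = np.random.randn(3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(np.random.uniform(-max_angle_deg, max_angle_deg))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues' formula
    t = np.random.uniform(-max_trans, max_trans, size=3)
    return R, t

def generate_self_sample(image, depth, K_intr):
    """Backproject the predicted depth, apply a random rigid motion, and reproject
    to synthesize a self-sample that satisfies the static-scene assumption."""
    h, w = depth.shape
    R, t = random_rigid_transform()
    # Backproject all pixels to 3D camera coordinates using the predicted depth.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N
    pts = np.linalg.inv(K_intr) @ pix * depth.reshape(1, -1)            # 3 x N
    # Apply the sampled rigid transform (the motion of a virtual camera).
    pts_new = R @ pts + t[:, None]
    # Reproject into the virtual view; naive nearest-pixel forward splatting of colors.
    proj = K_intr @ pts_new
    z = np.maximum(proj[2], 1e-6)
    u2 = np.round(proj[0] / z).astype(int)
    v2 = np.round(proj[1] / z).astype(int)
    valid = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h) & (proj[2] > 0)
    self_sample = np.zeros_like(image)
    self_sample[v2[valid], u2[valid]] = image.reshape(-1, image.shape[-1])[valid]
    # The transform (R, t) gives an exact relative pose, so the pair (image, self_sample)
    # always obeys the static-scene relation used for the extra supervision.
    return self_sample, (R, t)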
Pages: 2173-2180
Page count: 8
Related papers
50 records in total
  • [21] Self-Supervised Monocular Depth Estimation With Multiscale Perception
    Zhang, Yourun
    Gong, Maoguo
    Li, Jianzhao
    Zhang, Mingyang
    Jiang, Fenlong
    Zhao, Hongyu
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 3251 - 3266
  • [22] Self-supervised monocular depth estimation for gastrointestinal endoscopy
    Liu, Yuying
    Zuo, Siyang
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2023, 238
  • [23] Self-supervised monocular depth estimation with direct methods
    Wang, Haixia
    Sun, Yehao
    Wu, Q. M. Jonathan
    Lu, Xiao
    Wang, Xiuling
    Zhang, Zhiguo
    NEUROCOMPUTING, 2021, 421 : 340 - 348
  • [25] Adaptive Self-supervised Depth Estimation in Monocular Videos
    Mendoza, Julio
    Pedrini, Helio
    IMAGE AND GRAPHICS (ICIG 2021), PT III, 2021, 12890 : 687 - 699
  • [26] Self-Supervised Monocular Depth Estimation With Extensive Pretraining
    Choi, Hyukdoo
    IEEE ACCESS, 2021, 9 : 157236 - 157246
  • [28] Monocular Depth Estimation via Self-Supervised Self-Distillation
    Hu, Haifeng
    Feng, Yuyang
    Li, Dapeng
    Zhang, Suofei
    Zhao, Haitao
    SENSORS, 2024, 24 (13)
  • [29] Monocular depth estimation for vision-based vehicles based on a self-supervised learning method
    Tektonidis, Marco
    Monnin, David
    AUTONOMOUS SYSTEMS: SENSORS, PROCESSING, AND SECURITY FOR VEHICLES AND INFRASTRUCTURE 2020, 2020, 11415
  • [30] TinyDepth: Lightweight self-supervised monocular depth estimation based on transformer
    Cheng, Zeyu
    Zhang, Yi
    Yu, Yang
    Song, Zhe
    Tang, Chengkai
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 138