Spatially variant biases considered self-supervised depth estimation based on laparoscopic videos

Cited by: 1
Authors
Li, Wenda [1]
Hayashi, Yuichiro [1]
Oda, Masahiro [1,2]
Kitasaka, Takayuki [3]
Misawa, Kazunari [4]
Mori, Kensaku [1,5,6]
Affiliations
[1] Nagoya Univ, Grad Sch Informat, Nagoya, Aichi, Japan
[2] Nagoya Univ, Informat & Commun, Nagoya, Aichi, Japan
[3] Aichi Inst Technol, Fac Informat Sci, Toyota, Japan
[4] Aichi Canc Ctr Hosp, Dept Gastroenterol Surg, Nagoya, Aichi, Japan
[5] Nagoya Univ, Informat Technol Ctr, Nagoya, Aichi, Japan
[6] Natl Inst Informat, Res Ctr Med Bigdata, Tokyo, Japan
Keywords
Depth estimation; laparoscopic videos; self-supervised; NETWORKS; NET;
DOI
10.1080/21681163.2021.2015723
Chinese Library Classification (CLC)
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Depth estimation is an essential tool for obtaining depth information in robotic surgery and augmented-reality applications within current laparoscopic surgical robot systems. Because ground truth for depth values and laparoscope motion is unavailable during an operation, supervised depth estimation networks struggle to predict depth maps from laparoscopic images. It is also challenging to generate correct depth maps for the varying environments seen in abdominal images. To tackle these problems, we propose a novel monocular self-supervised depth estimation network with a sparse nest architecture. We design a non-local block to capture broader and deeper context features, which further enhances the network's scene-variant generalisation capacity across differences in datasets. Moreover, we introduce an improved multi-mask feature in the loss function to address the classical occlusion problem using the time-series information of stereo videos. We also use heteroscedastic aleatoric uncertainty to reduce the effect of noisy data on depth estimation. We compared the proposed method with existing methods on different scenes in the datasets. The experimental results show that the proposed model outperforms state-of-the-art models both qualitatively and quantitatively.
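The abstract's use of heteroscedastic aleatoric uncertainty to down-weight noisy pixels can be illustrated with the standard formulation of Kendall and Gal (2017). The sketch below is illustrative only: the function name, the per-pixel log-variance head (log_var), and the plain L1 photometric residual are assumptions, not the paper's exact loss or architecture.

    # Minimal sketch: aleatoric-uncertainty-weighted photometric loss
    # (Kendall & Gal, 2017). Illustrative assumptions: an extra decoder head
    # predicts a per-pixel log-variance map; the residual is plain L1
    # (a real system may also include an SSIM term).
    import torch

    def uncertainty_weighted_photometric_loss(synth: torch.Tensor,
                                              target: torch.Tensor,
                                              log_var: torch.Tensor) -> torch.Tensor:
        # synth, target: (B, 3, H, W) warped source frame and reference frame.
        # log_var: (B, 1, H, W) per-pixel log-variance from the network.
        # Per-pixel photometric residual (L1 over colour channels).
        residual = (synth - target).abs().mean(dim=1, keepdim=True)
        # exp(-log_var) attenuates pixels flagged as noisy; the +log_var term
        # penalises the trivial solution of predicting large variance everywhere.
        per_pixel = residual * torch.exp(-log_var) + log_var
        return per_pixel.mean()

Under this formulation, pixels with high predicted variance (for example, specular highlights or smoke in laparoscopic frames) contribute less to the gradient, which is the effect the abstract attributes to the uncertainty term.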
Pages: 274-282
Number of pages: 9
Related Papers
50 records in total
  • [32] Self-supervised recurrent depth estimation with attention mechanisms
    Makarov, Ilya
    Bakhanova, Maria
    Nikolenko, Sergey
    Gerasimova, Olga
    PEERJ COMPUTER SCIENCE, 2022, 8
  • [33] Self-Supervised Monocular Scene Decomposition and Depth Estimation
    Safadoust, Sadra
    Guney, Fatma
    2021 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2021), 2021, : 627 - 636
  • [34] Learn to Adapt for Self-Supervised Monocular Depth Estimation
    Sun, Qiyu
    Yen, Gary G.
    Tang, Yang
    Zhao, Chaoqiang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 15647 - 15659
  • [37] Self-Supervised Monocular Depth Estimation With Multiscale Perception
    Zhang, Yourun
    Gong, Maoguo
    Li, Jianzhao
    Zhang, Mingyang
    Jiang, Fenlong
    Zhao, Hongyu
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 3251 - 3266
  • [38] Self-Supervised Monocular Depth Estimation With Extensive Pretraining
    Choi, Hyukdoo
    IEEE ACCESS, 2021, 9 : 157236 - 157246
  • [40] Self-supervised monocular depth estimation for gastrointestinal endoscopy
    Liu, Yuying
    Zuo, Siyang
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2023, 238