Spatially variant biases considered self-supervised depth estimation based on laparoscopic videos

Cited by: 1
Authors
Li, Wenda [1]
Hayashi, Yuichiro [1]
Oda, Masahiro [1,2]
Kitasaka, Takayuki [3]
Misawa, Kazunari [4]
Mori, Kensaku [1,5,6]
Affiliations
[1] Nagoya Univ, Grad Sch Informat, Nagoya, Aichi, Japan
[2] Nagoya Univ, Informat & Commun, Nagoya, Aichi, Japan
[3] Aichi Inst Technol, Fac Informat Sci, Toyota, Japan
[4] Aichi Canc Ctr Hosp, Dept Gastroenterol Surg, Nagoya, Aichi, Japan
[5] Nagoya Univ, Informat Technol Ctr, Nagoya, Aichi, Japan
[6] Natl Inst Informat, Res Ctr Med Bigdata, Tokyo, Japan
Keywords
Depth estimation; laparoscopic videos; self-supervised; NETWORKS; NET;
DOI
10.1080/21681163.2021.2015723
Chinese Library Classification (CLC): R318 [Biomedical Engineering]
Discipline code: 0831
Abstract
Depth estimation is essential for providing depth information to robotic surgery and augmented-reality technology in current laparoscopic surgical robot systems. Because ground-truth depth values and laparoscope motions are unavailable during operation, depth estimation networks struggle to predict depth maps from laparoscopic images under a supervised strategy, and generating correct depth maps for the varied environments seen in abdominal images is challenging. To tackle these problems, we propose a novel monocular self-supervised depth estimation network with a sparse nest architecture. We design a non-local block to capture broader and deeper context features, which further enhances the network's scene-variant generalisation capacity across datasets. Moreover, we introduce an improved multi-mask feature in the loss function to tackle the classical occlusion problem, based on time-series information from stereo videos. We also use heteroscedastic aleatoric uncertainty to reduce the effect of noisy data on depth estimation. We compared the proposed method with existing methods on different scenes in the datasets. The experimental results show that the proposed model outperformed state-of-the-art models both qualitatively and quantitatively.
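The heteroscedastic aleatoric uncertainty mentioned in the abstract is typically realised as an uncertainty-weighted reconstruction loss in the style of Kendall and Gal: the network predicts a per-pixel log-variance that attenuates the photometric residual, so noisy pixels contribute less. The sketch below is a minimal illustration of that general formulation, not the paper's exact loss; the function name and the use of an L1 residual are assumptions.

```python
import numpy as np

def uncertainty_weighted_photometric_loss(pred, target, log_sigma):
    """Photometric residual attenuated by predicted heteroscedastic
    aleatoric uncertainty (Kendall-and-Gal-style formulation):

        L = mean( |pred - target| * exp(-log_sigma) + log_sigma )

    `log_sigma` is the per-pixel log-uncertainty predicted by the
    network; the additive log_sigma term penalises the trivial
    solution of inflating uncertainty everywhere.
    """
    residual = np.abs(pred - target)
    return float(np.mean(residual * np.exp(-log_sigma) + log_sigma))
```

With `log_sigma = 0` everywhere, the loss reduces to the plain mean absolute photometric error; as predicted uncertainty grows on a pixel, that pixel's residual is down-weighted while the `log_sigma` term keeps the uncertainty from growing without bound.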
Pages: 274-282 (9 pages)
Related Papers (50 total)
  • [21] TransDSSL: Transformer Based Depth Estimation via Self-Supervised Learning
    Han, Daechan
    Shin, Jeongmin
    Kim, Namil
    Hwang, Soonmin
    Choi, Yukyung
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04): 10969-10976
  • [22] Self-supervised Depth Estimation based on Feature Sharing and Consistency Constraints
    Mendoza, Julio
    Pedrini, Helio
    PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL 5: VISAPP, 2020: 134-141
  • [23] Self-Supervised Learning of Monocular Depth Estimation Based on Progressive Strategy
    Wang, Huachun
    Sang, Xinzhu
    Chen, Duo
    Wang, Peng
    Yan, Binbin
    Qi, Shuai
    Ye, Xiaoqian
    Yao, Tong
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2021, 7: 375-383
  • [24] Depth estimation algorithm of monocular image based on self-supervised learning
    Bai L.
    Liu L.-J.
    Li X.-A.
    Wu S.
    Liu R.-Q.
    Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2023, 53 (04): 1139-1145
  • [25] TinyDepth: Lightweight self-supervised monocular depth estimation based on transformer
    Cheng, Zeyu
    Zhang, Yi
    Yu, Yang
    Song, Zhe
    Tang, Chengkai
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 138
  • [26] SVT-SDE: Spatiotemporal Vision Transformers-Based Self-Supervised Depth Estimation in Stereoscopic Surgical Videos
    Tao, Rong
    Huang, Baoru
    Zou, Xiaoyang
    Zheng, Guoyan
    IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 2023, 5 (01): 42-53
  • [27] Self-supervised Learning of Depth and Camera Motion from 360° Videos
    Wang, Fu-En
    Hu, Hou-Ning
    Cheng, Hsien-Tzu
    Lin, Juan-Ting
    Yang, Shang-Ta
    Shih, Meng-Li
    Chu, Hung-Kuo
    Sun, Min
    COMPUTER VISION - ACCV 2018, PT V, 2019, 11365: 53-68
  • [28] Joint Self-Supervised Monocular Depth Estimation and SLAM
    Xing, Xiaoxia
    Cai, Yinghao
    Lu, Tao
    Yang, Yiping
    Wen, Dayong
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022: 4030-4036
  • [29] Towards Keypoint Guided Self-supervised Depth Estimation
    Bartol, Kristijan
    Bojanic, David
    Petkovic, Tomislav
    Pribanic, Tomislav
    Donoso, Yago
    VISAPP: PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL 4: VISAPP, 2020: 583-589
  • [30] Semantically guided self-supervised monocular depth estimation
    Lu, Xiao
    Sun, Haoran
    Wang, Xiuling
    Zhang, Zhiguo
    Wang, Haixia
    IET IMAGE PROCESSING, 2022, 16 (05): 1293-1304