Deep Depth Estimation on 360° Images with a Double Quaternion Loss

Cited by: 7
Authors
Feng, Brandon Yushan [1]
Yao, Wangjue [1]
Liu, Zheyuan [2]
Varshney, Amitabh [1]
Affiliations
[1] University of Maryland, College Park, MD 20742 USA
[2] University of Virginia, Charlottesville, VA 22903 USA
Funding
US National Science Foundation
Keywords
PREDICTION
DOI
10.1109/3DV50981.2020.00062
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
While 360° images are becoming ubiquitous due to the popularity of panoramic content, most existing depth estimation techniques developed for perspective images cannot be applied to them directly. In this paper, we present a deep-learning-based framework for estimating depth from 360° images. We present an adaptive depth refinement procedure that refines depth estimates using normal estimates and pixel-wise uncertainty scores. We introduce a double quaternion approximation to combine the losses of the joint estimation of depth and surface normal. Furthermore, we use the double quaternion formulation to also measure stereo consistency between horizontally displaced depth maps, leading to a new loss function for training a depth estimation CNN. Results show that the new double-quaternion-based loss and the adaptive depth refinement procedure lead to better network performance. Our proposed method can be used with monocular as well as stereo images. When evaluated on several datasets, our method surpasses state-of-the-art methods on most metrics.
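The abstract names the double quaternion loss but does not define it. Purely as a hedged illustration, and not the authors' formulation, the sketch below shows one way a pair of quaternions could jointly encode a per-pixel depth (as a depth-scaled viewing ray) and a surface normal, with a simple per-pixel distance combining the two terms. Every function name, the depth/normal encoding, and the weights w_depth / w_normal are hypothetical assumptions.

```python
# Hypothetical sketch only: NOT the paper's actual double quaternion loss.
import numpy as np

def pure_quaternion(v):
    """Embed 3-vectors as pure quaternions (0, x, y, z) along the last axis."""
    v = np.asarray(v, dtype=np.float64)
    zeros = np.zeros(v.shape[:-1] + (1,))
    return np.concatenate([zeros, v], axis=-1)

def double_quaternion(depth, rays, normals):
    """Hypothetical pairing: one quaternion for the depth-scaled viewing ray,
    one for the surface normal."""
    q_depth = pure_quaternion(depth[..., None] * rays)
    q_normal = pure_quaternion(normals)
    return q_depth, q_normal

def joint_loss(pred, gt, w_depth=1.0, w_normal=1.0):
    """Combine depth and normal errors through their quaternion representations."""
    (qd_p, qn_p), (qd_g, qn_g) = pred, gt
    depth_err = np.linalg.norm(qd_p - qd_g, axis=-1)
    normal_err = np.linalg.norm(qn_p - qn_g, axis=-1)
    return float((w_depth * depth_err + w_normal * normal_err).mean())

# Toy usage on a 2x2 "image" with all rays and normals pointing along +z.
rays = np.tile([0.0, 0.0, 1.0], (2, 2, 1))
normals = np.tile([0.0, 0.0, 1.0], (2, 2, 1))
depth_gt = np.ones((2, 2))
depth_pred = depth_gt + 0.1          # slightly wrong depth everywhere
print(joint_loss(double_quaternion(depth_pred, rays, normals),
                 double_quaternion(depth_gt, rays, normals)))  # ~0.1
```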
Pages: 524-533
Number of pages: 10
Related Papers (50 total; entries [41]-[50] shown)
  • [41] ScanDMM: A Deep Markov Model of Scanpath Prediction for 360° Images
    Sui, Xiangjie
    Fang, Yuming
    Zhu, Hanwei
    Wang, Shiqi
    Wang, Zhou
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 6989 - 6999
  • [42] Face-aware Saliency Estimation Model for 360° Images
    Mazumdar, Pramit
    Arru, Giuliano
    Carli, Marco
    Battisti, Federica
    2019 27TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2019
  • [43] A CONTENT-BASED APPROACH FOR SALIENCY ESTIMATION IN 360 IMAGES
    Mazumdar, Pramit
    Battisti, Federica
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 3197 - 3201
  • [44] Attentive Deep Stitching and Quality Assessment for 360° Omnidirectional Images
    Li, Jia
    Zhao, Yifan
    Ye, Weihua
    Yu, Kaiwen
    Ge, Shiming
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2020, 14 (01) : 209 - 221
  • [45] HRDFuse: Monocular 360° Depth Estimation by Collaboratively Learning Holistic-with-Regional Depth Distributions
    Ai, Hao
    Cao, Zidong
    Cao, Yan-Pei
    Shan, Ying
    Wang, Lin
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 13273 - 13282
  • [46] MS360: A Multi-Scale Feature Fusion Framework for 360 Monocular Depth Estimation
    Mohadikar, Payal
    Fan, Chuanmao
    Duan, Ye
    PROCEEDINGS OF THE 50TH GRAPHICS INTERFACE CONFERENCE, GI 2024, 2024
  • [47] CAPDepth: 360 Monocular Depth Estimation by Content-Aware Projection
    Gao, Xu
    Shi, Yongqiang
    Zhao, Yaqian
    Wang, Yanan
    Wang, Jin
    Wu, Gang
    APPLIED SCIENCES-BASEL, 2025, 15 (02)
  • [48] MODE: Multi-view Omnidirectional Depth Estimation with 360° Cameras
    Li, Ming
    Jin, Xueqian
    Hu, Xuejiao
    Dai, Jingzhao
    Du, Sidan
    Li, Yang
    COMPUTER VISION - ECCV 2022, PT XXXIII, 2022, 13693 : 197 - 213
  • [49] Spherical View Synthesis for Self-Supervised 360° Depth Estimation
    Zioulis, Nikolaos
    Karakottas, Antonis
    Zarpalas, Dimitrios
    Alvarez, Federico
    Daras, Petros
    2019 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2019), 2019, : 690 - 699
  • [50] EGformer: Equirectangular Geometry-biased Transformer for 360 Depth Estimation
    Yun, Ilwi
    Shin, Chanyong
    Lee, Hyunku
    Lee, Hyuk-Jae
    Rhee, Chae Eun
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 6078 - 6089