Unsupervised light field disparity estimation using confidence weight and occlusion-aware

Cited by: 0
Authors
Xiao, Bo [1 ]
Gao, Xiujing [2 ,3 ]
Zheng, Huadong [4 ]
Yang, Huibao [5 ]
Huang, Hongwu [1 ,2 ,3 ,5 ]
Affiliations
[1] Hunan Univ, State Key Lab Adv Design & Mfg Vehicle Body, 2 Lushan South Rd, Changsha 410082, Peoples R China
[2] Fujian Univ Technol, Sch Smart Marine Sci & Engn, 69 Xuefu South Rd, Fuzhou 350118, Peoples R China
[3] Fujian Prov Key Lab Marine Smart Equipment, 69 Xuefu South Rd, Fuzhou 350118, Peoples R China
[4] Shanghai Univ, Dept Precis Mech Engn, 99 Shangda Rd, Shanghai 200444, Peoples R China
[5] Xiamen Univ, Sch Aerosp Engn, 4221-134 Xiangan North Rd, Xiamen 361102, Peoples R China
Keywords
DEPTH; NETWORK; CAMERA; FUSION
DOI
10.1016/j.optlaseng.2025.108928
Chinese Library Classification
O43 [Optics]
Subject classification codes
070207; 0803
Abstract
Light field disparity estimation is a crucial topic in computer vision. Deep learning methods, especially supervised ones, now perform significantly better than traditional methods; however, the high cost of acquiring real-world depth/disparity data for training greatly limits the generalization ability of supervised approaches. In this paper, we propose an unsupervised learning method for light field disparity estimation that uses confidence weights to evaluate the reliability of disparity features. First, during disparity estimation and inference, the confidence weights assign higher weight to non-occluded, low-noise areas, effectively handling errors caused by occlusion and noise. Second, we design an occlusion-aware network that predicts occluded regions in the views, which removes the interference of occluded regions from the unsupervised loss during training and thus improves overall estimation accuracy. Extensive experiments show that our method outperforms traditional methods and several recent unsupervised learning methods.
Pages: 9
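The abstract's two mechanisms, confidence-weighted disparity regression and an occlusion mask applied to the unsupervised photometric loss, can be pictured with a minimal PyTorch-style sketch. This is an illustration under assumptions, not the authors' implementation: the function names, tensor shapes, soft-argmax regression, and L1 photometric term are all hypothetical choices.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_disparity(scores: torch.Tensor,
                                  confidence: torch.Tensor) -> torch.Tensor:
    """Regress disparity from matching scores, down-weighted by confidence.

    scores:     (B, D, H, W) matching scores over D disparity candidates
                (higher = better match).  Hypothetical layout.
    confidence: (B, 1, H, W) in [0, 1]; high for non-occluded, low-noise pixels.
    """
    prob = F.softmax(scores, dim=1)  # per-pixel distribution over candidates
    candidates = torch.arange(scores.size(1), dtype=prob.dtype,
                              device=prob.device).view(1, -1, 1, 1)
    disparity = (prob * candidates).sum(dim=1, keepdim=True)  # soft argmax
    # Suppress unreliable (occluded / noisy) estimates before fusing views.
    return disparity * confidence

def occlusion_masked_photometric_loss(center: torch.Tensor,
                                      warped: torch.Tensor,
                                      visible: torch.Tensor) -> torch.Tensor:
    """Unsupervised reconstruction loss that ignores occluded pixels.

    center:  (B, 3, H, W) central sub-aperture view.
    warped:  (B, 3, H, W) side view warped to the center using the
             predicted disparity.
    visible: (B, 1, H, W) occlusion-aware mask, 1 where the pixel is seen
             in both views, 0 where it is occluded.
    """
    l1 = (center - warped).abs().mean(dim=1, keepdim=True)
    # Average over visible pixels only, so the loss scale does not drift
    # with the amount of occlusion in a scene.
    return (l1 * visible).sum() / visible.sum().clamp(min=1.0)
```

Normalizing by the count of visible pixels keeps the loss comparable across scenes with different amounts of occlusion, which is the usual way such a predicted mask is folded into a photometric term.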