Learning-Based View Synthesis for Light Field Cameras

Cited by: 530
Authors
Kalantari, Nima Khademi [1 ]
Wang, Ting-Chun [2 ]
Ramamoorthi, Ravi [1 ]
Affiliations
[1] Univ Calif San Diego, La Jolla, CA 92093 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
Source
ACM TRANSACTIONS ON GRAPHICS | 2016, Vol. 35, No. 6
Funding
US National Science Foundation;
Keywords
view synthesis; light field; convolutional neural network; disparity estimation;
D O I
10.1145/2980179.2980251
CLC Number
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
With the introduction of consumer light field cameras, light field imaging has recently become widespread. However, there is an inherent trade-off between angular and spatial resolution, so these cameras sample sparsely in either the spatial or the angular domain. In this paper, we use machine learning to mitigate this trade-off. Specifically, we propose a novel learning-based approach to synthesize new views from a sparse set of input views. We build upon existing view synthesis techniques and break the process down into disparity and color estimation components. We model these two components with two sequential convolutional neural networks and train both networks simultaneously by minimizing the error between the synthesized and ground-truth images. We demonstrate our approach using only the four corner sub-aperture views of light fields captured by the Lytro Illum camera. Experimental results show that our approach synthesizes high-quality images superior to state-of-the-art techniques on a variety of challenging real-world scenes. We believe our method could decrease the required angular resolution of consumer light field cameras, allowing their spatial resolution to increase.
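The disparity-then-color pipeline described in the abstract can be sketched in a much-simplified form: given a per-pixel disparity map, backward-warp each corner sub-aperture view to the novel angular position, then combine the warped views. The sketch below is illustrative only, assuming a given disparity map and using nearest-neighbor sampling and a plain average as stand-ins for the paper's two CNNs; the function names `warp_view` and `synthesize` are hypothetical, not from the paper.

```python
import numpy as np

def warp_view(src, disparity, du, dv):
    """Backward-warp a sub-aperture view toward a novel angular position
    offset by (du, dv), using a per-pixel disparity map.

    Nearest-neighbor sampling; source coordinates falling outside the
    image are clamped to the border.
    """
    h, w = disparity.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Each target pixel samples the source view at a location shifted
    # by disparity times the angular offset.
    src_x = np.clip(np.round(xs + disparity * du).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + disparity * dv).astype(int), 0, h - 1)
    return src[src_y, src_x]

def synthesize(corner_views, offsets, disparity):
    """Warp each corner view to the target view and average them --
    a trivial stand-in for the paper's color-estimation CNN."""
    warped = [warp_view(v, disparity, du, dv)
              for v, (du, dv) in zip(corner_views, offsets)]
    return np.mean(warped, axis=0)
```

In the actual method, the disparity map is itself predicted by the first CNN from features of the warped input views, and the second CNN learns the per-pixel blending rather than averaging, with both trained end-to-end against ground-truth views.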
Pages: 10
Related Papers
50 records
  • [31] Subjective Evaluation of Light Field Image Compression Methods based on View Synthesis
    Bakir, Nader
    Fezza, Sid Ahmed
    Hamidouche, Wassim
    Samrouth, Khouloud
    Deforges, Olivier
    2019 27TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2019,
  • [32] Depth-assisted calibration on learning-based factorization for a compressive light field display
    Sun, Yangfan
    Li, Zhu
    Wang, Shizheng
    Gao, Wei
    OPTICS EXPRESS, 2023, 31 (04) : 5399 - 5413
  • [33] HDR light field imaging of dynamic scenes: A learning-based method and a benchmark dataset
    Chen, Yeyao
    Jiang, Gangyi
    Yu, Mei
    Jin, Chongchong
    Xu, Haiyong
    Ho, Yo-Sung
    PATTERN RECOGNITION, 2024, 150
  • [34] Image-Based Visual Servoing With Light Field Cameras
    Tsai, Dorian
    Dansereau, Donald G.
    Peynot, Thierry
    Corke, Peter
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2017, 2 (02): : 912 - 919
  • [35] Learning-Based Synthesis of Safety Controllers
    Neider, Daniel
    Markgraf, Oliver
    2019 FORMAL METHODS IN COMPUTER AIDED DESIGN (FMCAD), 2019, : 120 - 128
  • [36] Multi-view learning-based heterogeneous network representation learning
    Chen, Lei
    Li, Yuan
    Deng, Xingye
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2023, 35 (10)
  • [37] Learning-based cursive handwriting synthesis
    Wang, J
    Wu, CY
    Xu, YQ
    Shum, HY
    Ji, L
    EIGHTH INTERNATIONAL WORKSHOP ON FRONTIERS IN HANDWRITING RECOGNITION: PROCEEDINGS, 2002, : 157 - 162
  • [38] HIGH-QUALITY VIRTUAL VIEW SYNTHESIS FOR LIGHT FIELD CAMERAS USING MULTI-LOSS CONVOLUTIONAL NEURAL NETWORKS
    Nian, Zicheng
    Jung, Cheolkon
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 2605 - 2609
  • [39] A Novel Deep Learning-based Disocclusion Hole-Filling Approach for Stereoscopic View Synthesis
    Liu, Wei
    Cui, Mingyue
    Ma, Liyan
    IAENG International Journal of Applied Mathematics, 2023, 53 (02)
  • [40] Deep learning-based strategies for the detection and tracking of drones using several cameras
    Unlu E.
    Zenou E.
    Riviere N.
    Dupouy P.-E.
    IPSJ Transactions on Computer Vision and Applications, 2019, 11 (01):