Learning-Based View Synthesis for Light Field Cameras

Cited by: 530
Authors
Kalantari, Nima Khademi [1 ]
Wang, Ting-Chun [2 ]
Ramamoorthi, Ravi [1 ]
Affiliations
[1] Univ Calif San Diego, La Jolla, CA 92093 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
Source
ACM TRANSACTIONS ON GRAPHICS | 2016, Vol. 35, Issue 6
Funding
U.S. National Science Foundation
Keywords
view synthesis; light field; convolutional neural network; disparity estimation;
DOI
10.1145/2980179.2980251
Chinese Library Classification
TP31 [Computer Software]
Discipline Code
081202; 0835
Abstract
With the introduction of consumer light field cameras, light field imaging has recently become widespread. However, there is an inherent trade-off between the angular and spatial resolution, and thus these cameras often sample sparsely in either the spatial or the angular domain. In this paper, we use machine learning to mitigate this trade-off. Specifically, we propose a novel learning-based approach to synthesize new views from a sparse set of input views. We build upon existing view synthesis techniques and break down the process into disparity and color estimation components. We use two sequential convolutional neural networks to model these two components and train both networks simultaneously by minimizing the error between the synthesized and ground truth images. We show the performance of our approach using only the four corner sub-aperture views from light fields captured by the Lytro Illum camera. Experimental results show that our approach synthesizes high-quality images that are superior to state-of-the-art techniques on a variety of challenging real-world scenes. We believe our method could potentially decrease the required angular resolution of consumer light field cameras, allowing their spatial resolution to increase.
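To make the two-stage pipeline described in the abstract concrete, the sketch below shows one plausible way to wire two small CNNs together in PyTorch: a disparity network that predicts a disparity map at the novel view from the four corner sub-aperture images, a differentiable backward warp that shifts each corner view toward the novel view, and a color network that fuses the warped views into the final image, with both networks trained jointly against the ground-truth view. This is a minimal sketch under assumed choices; the layer counts, channel widths, angular offsets, L1 loss, and helper names (DisparityCNN, ColorCNN, warp) are illustrative, not the architecture or features used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # 3x3 convolution + ReLU, the basic unit of both small networks
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class DisparityCNN(nn.Module):
    """Estimates a single-channel disparity map at the novel view (assumed design)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_channels, 64),
                                 conv_block(64, 64),
                                 nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)


class ColorCNN(nn.Module):
    """Fuses the warped corner views (plus disparity) into the final RGB image (assumed design)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_channels, 64),
                                 conv_block(64, 64),
                                 nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, x):
        return self.net(x)


def warp(view, disparity, du, dv):
    """Backward-warp one corner view toward the novel view.

    (du, dv) is the angular offset between the corner view and the novel view;
    pixels are resampled at positions shifted by offset * disparity.
    """
    _, _, h, w = view.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=view.dtype, device=view.device),
                            torch.arange(w, dtype=view.dtype, device=view.device),
                            indexing="ij")
    x_src = xs + du * disparity[:, 0]
    y_src = ys + dv * disparity[:, 0]
    # normalize sample coordinates to [-1, 1] as required by grid_sample
    grid = torch.stack((2 * x_src / (w - 1) - 1, 2 * y_src / (h - 1) - 1), dim=-1)
    return F.grid_sample(view, grid, align_corners=True)


# Toy forward/backward pass: four RGB corner views -> one synthesized novel view.
disp_net = DisparityCNN(in_channels=4 * 3)        # stacked corner views as input
color_net = ColorCNN(in_channels=4 * 3 + 1)       # warped views + disparity map
corners = torch.rand(1, 4, 3, 64, 64)             # placeholder corner sub-apertures
target = torch.rand(1, 3, 64, 64)                 # placeholder ground-truth view
offsets = [(-2.0, -2.0), (-2.0, 2.0), (2.0, -2.0), (2.0, 2.0)]  # assumed angular offsets

disparity = disp_net(corners.flatten(1, 2))       # disparity map at the novel view
warped = [warp(corners[:, i], disparity, du, dv) for i, (du, dv) in enumerate(offsets)]
synthesized = color_net(torch.cat(warped + [disparity], dim=1))

loss = F.l1_loss(synthesized, target)             # error against the ground-truth view
loss.backward()                                   # gradients reach both networks
```

Because the warp is implemented with a differentiable resampling step (grid_sample here), the photometric loss back-propagates through the color network into the disparity network, which is what allows both components to be trained simultaneously from the synthesized-versus-ground-truth error alone, as the abstract describes.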
Pages: 10