FoV-NeRF: Foveated Neural Radiance Fields for Virtual Reality

Cited by: 47
|
Authors
Deng, Nianchen [1]
He, Zhenyi [2]
Ye, Jiannan [1]
Duinkharjav, Budmonde [3]
Chakravarthula, Praneeth [4]
Yang, Xubo [1,5]
Sun, Qi [6]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Software, Shanghai, Peoples R China
[2] NYU, Dept Comp Sci, New York, NY 10003 USA
[3] NYU, Immers Comp Lab, New York, NY 10003 USA
[4] Univ N Carolina, Comp Sci, Chapel Hill, NC USA
[5] Peng Cheng Lab, Shenzhen, Peoples R China
[6] NYU, Tandon Sch Engn, New York, NY 10003 USA
Keywords
Virtual Reality; Gaze-Contingent Graphics; Neural Representation; Foveated Rendering
DOI
10.1109/TVCG.2022.3203102
Chinese Library Classification (CLC)
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
Virtual Reality (VR) is becoming ubiquitous with the rise of consumer displays and commercial VR platforms. Such displays require low-latency, high-quality rendering of synthetic imagery with reduced compute overheads. Recent advances in neural rendering have shown promise for unlocking new possibilities in 3D computer graphics via image-based representations of virtual or physical environments. Specifically, neural radiance fields (NeRF) demonstrated that photo-realistic quality and continuous view changes of 3D scenes can be achieved without loss of view-dependent effects. While NeRF can significantly benefit rendering for VR applications, it faces unique challenges posed by wide field-of-view, high-resolution, and stereoscopic/egocentric viewing, which typically cause low quality and high latency in the rendered images. In VR, this not only harms the interaction experience but may also cause sickness. To tackle these problems toward six-degrees-of-freedom, egocentric, and stereo NeRF in VR, we present the first gaze-contingent 3D neural representation and view synthesis method. We incorporate the human psychophysics of visual and stereo acuity into an egocentric neural representation of 3D scenery. We then jointly optimize latency/performance and visual quality while mutually bridging human perception and neural scene synthesis to achieve perceptually high-quality immersive interaction. We conducted both objective analyses and subjective studies to evaluate the effectiveness of our approach. We find that our method significantly reduces latency (up to 99% time reduction compared with NeRF) without loss of high-fidelity rendering (perceptually identical to full-resolution ground truth). The presented approach may serve as a first step toward future VR/AR systems that capture, teleport, and visualize remote environments in real time.
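The core idea the abstract describes, allocating rendering effort according to the human acuity falloff away from the gaze point, can be illustrated with a minimal sketch. This is not the paper's actual pipeline; it assumes a common hyperbolic acuity model (acuity halving roughly every ~2.3 degrees of eccentricity) and a hypothetical per-ray sample budget for a NeRF-style ray marcher:

```python
def relative_acuity(ecc_deg, e2=2.3):
    """Approximate visual-acuity falloff with eccentricity.

    Hyperbolic model: acuity halves every `e2` degrees away from
    the gaze point (e2 ~ 2.3 deg is a commonly cited fit).
    """
    return 1.0 / (1.0 + ecc_deg / e2)


def samples_per_ray(ecc_deg, max_samples=128, min_samples=8):
    """Hypothetical gaze-contingent sample allocation: march rays
    densely near the fovea and sparsely in the periphery, which is
    where foveated methods recover most of their latency savings."""
    n = round(max_samples * relative_acuity(ecc_deg))
    return max(min_samples, n)
```

Under this toy model a ray through the gaze point gets the full 128 samples, while a far-peripheral ray is clamped to the 8-sample floor, giving an order-of-magnitude reduction in per-ray work for most of a wide-field-of-view frame.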
Pages: 3854-3864
Number of pages: 11