Fast radiance field reconstruction from sparse inputs

Cited by: 0
Authors
Lai, Song [1 ,2 ]
Cui, Linyan [1 ]
Yin, Jihao [1 ]
Affiliations
[1] Beihang Univ, Sch Astronaut, Dept Aerosp Informat Engn, Beijing 100191, Peoples R China
[2] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
3D reconstruction; Neural radiance field; Shape from silhouette; Novel view synthesis;
DOI
10.1016/j.patcog.2024.110863
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural Radiance Field (NeRF) has emerged as a powerful method for data-driven 3D reconstruction because of its simplicity and state-of-the-art performance. However, NeRF requires densely captured, calibrated images and lengthy training and rendering times to achieve high-resolution reconstruction. We therefore propose a fast radiance field reconstruction method that works from a sparse set of images with silhouettes. Our approach integrates NeRF with Shape from Silhouette, a traditional 3D reconstruction method that uses silhouette information to fit the shape of an object. To combine NeRF's implicit representation with Shape from Silhouette's explicit representation, we propose a novel explicit-implicit radiance field representation: voxel grids with per-voxel confidence and feature embeddings model the geometry, and a multilayer perceptron decodes view-dependent color emission for the appearance. We make the reconstructed geometry compact by exploiting the silhouette images, which avoids most artifacts in sparse-input scenarios and speeds up training and rendering. We further apply voxel dilating and pruning to refine the geometry prediction, and we impose a total variation regularization on our model to encourage a smooth radiance field. Experiments on the DTU and NeRF-Synthetic datasets show that our algorithm surpasses existing baselines in both efficiency and accuracy.
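The abstract couples an explicit voxel representation (per-voxel confidence and feature embeddings) with a small implicit MLP color decoder and smooths the field with a total variation penalty. Below is a minimal, hypothetical PyTorch sketch of that kind of hybrid representation; the class name, grid resolution, feature dimension, and MLP width are illustrative assumptions, not the authors' implementation or hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplicitImplicitField(nn.Module):
    """Hypothetical explicit-implicit radiance field: voxel grids plus an MLP color decoder."""

    def __init__(self, grid_res=128, feat_dim=12, hidden=64):
        super().__init__()
        # Explicit geometry: a per-voxel confidence (density-like) grid and a feature grid.
        self.confidence = nn.Parameter(torch.zeros(1, 1, grid_res, grid_res, grid_res))
        self.features = nn.Parameter(torch.zeros(1, feat_dim, grid_res, grid_res, grid_res))
        # Implicit appearance: a shallow MLP decoding view-dependent color.
        self.color_mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        # xyz: (N, 3) sample points in [-1, 1]^3; view_dir: (N, 3) unit viewing directions.
        coords = xyz.reshape(1, -1, 1, 1, 3)  # layout expected by grid_sample for 3D grids
        sigma = F.grid_sample(self.confidence, coords, align_corners=True)
        feat = F.grid_sample(self.features, coords, align_corners=True)
        sigma = F.softplus(sigma.reshape(-1, 1))              # non-negative density
        feat = feat.reshape(self.features.shape[1], -1).t()   # (N, feat_dim)
        rgb = self.color_mlp(torch.cat([feat, view_dir], dim=-1))
        return sigma, rgb

def total_variation(grid):
    # Penalize squared differences between neighboring voxels along each axis
    # to encourage a smooth radiance field.
    dx = (grid[..., 1:, :, :] - grid[..., :-1, :, :]).pow(2).mean()
    dy = (grid[..., :, 1:, :] - grid[..., :, :-1, :]).pow(2).mean()
    dz = (grid[..., :, :, 1:] - grid[..., :, :, :-1]).pow(2).mean()
    return dx + dy + dz

# Example usage with random query points.
field = ExplicitImplicitField()
pts = torch.rand(1024, 3) * 2 - 1                  # points in [-1, 1]^3
dirs = F.normalize(torch.randn(1024, 3), dim=-1)   # unit view directions
sigma, rgb = field(pts, dirs)
tv_loss = total_variation(field.confidence) + total_variation(field.features)

In a training loop, sigma and rgb would feed standard volume-rendering compositing along each ray, and the total variation term would be added to the photometric and silhouette losses with a small weight.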
Pages: 12
Related Papers
50 records in total
  • [1] HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs
    Zhao, Fuqiang
    Yang, Wei
    Zhang, Jiakai
    Lin, Pei
    Zhang, Yingliang
    Yu, Jingyi
    Xu, Lan
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 7733 - 7743
  • [2] Volume Feature Rendering for Fast Neural Radiance Field Reconstruction
    Han, Kang
    Xiang, Wei
    Yu, Lu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [3] MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo
    Chen, Anpei
    Xu, Zexiang
    Zhao, Fuqiang
    Zhang, Xiaoshuai
    Xiang, Fanbo
    Yu, Jingyi
    Su, Hao
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 14104 - 14113
  • [4] PDRF: Progressively Deblurring Radiance Field for Fast Scene Reconstruction from Blurry Images
    Peng, Cheng
    Chellappa, Rama
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 2, 2023, : 2029 - 2037
  • [5] RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs
    Niemeyer, Michael
    Barron, Jonathan T.
    Mildenhall, Ben
    Sajjadi, Mehdi S. M.
    Geiger, Andreas
    Radwan, Noha
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 5470 - 5480
  • [6] Regularizing Neural Radiance Fields from Sparse RGB-D Inputs
    Li, Qian
    Multon, Franck
    Boukhayma, Adnane
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 2320 - 2324
  • [7] Where and How: Mitigating Confusion in Neural Radiance Fields from Sparse Inputs
    Bao, Yanqi
    Li, Yuxin
    Huo, Jing
    Ding, Tianyu
    Liang, Xinyue
    Li, Wenbin
    Gao, Yang
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 2180 - 2188
  • [8] DaRF: Boosting Radiance Fields from Sparse Inputs with Monocular Depth Adaptation
    Song, Jiuhn
    Park, Seonghoon
    An, Honggyu
    Cho, Seokju
    Kwak, Min-Seop
    Cho, Sungjin
    Kim, Seungryong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,