Hole Filling for View Synthesis Using Depth Guided Global Optimization

Cited by: 14
Authors
Luo, Guibo [1 ]
Zhu, Yuesheng [1 ]
Affiliations
[1] Peking Univ, Shenzhen Grad Sch, Commun & Informat Secur Lab, Shenzhen 518055, Peoples R China
Source
IEEE ACCESS | 2018, Vol. 6
Keywords
View synthesis; hole filling; depth image based rendering; trusted contents; global optimization; QUALITY ASSESSMENT; IMAGE COMPLETION; OBJECT REMOVAL; VIDEO; COMPRESSION;
DOI
10.1109/ACCESS.2018.2847312
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
View synthesis is an effective way to generate multi-view content from a limited number of views and can be utilized for 2-D-to-3-D video conversion, multi-view video compression, and virtual reality. Among view synthesis techniques, depth-image-based rendering (DIBR) is an important method for generating a virtual view from a video-plus-depth sequence. However, holes may be produced in the DIBR process. Many hole filling methods have been proposed to tackle this issue, but most of them cannot achieve global coherence or acquire trusted content. In this paper, a hole filling method with depth-guided global optimization is proposed for view synthesis. The global optimization is achieved by iterating a spatio-temporal approximate nearest neighbor (ANN) search step and a video reconstruction step. Directly applying global optimization might introduce foreground artifacts into the synthesized video; to prevent this, strategies are developed in this paper: depth information is used to guide the spatio-temporal ANN search, and the initialization step of the global optimization procedure is specified. Experimental results demonstrate that the proposed method outperforms other methods in terms of visual quality, trusted textures, and temporal consistency of the synthesized video.
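The iterative loop described in the abstract, alternating a spatio-temporal ANN search with a reconstruction step, is closely related to PatchMatch-style patch optimization. The single-frame Python sketch below illustrates that loop under stated assumptions: random candidate sampling stands in for a true ANN search, the depth-guidance rule (reject source patches closer than the median background depth) and the flat-color initialization are illustrative choices, and the temporal dimension is omitted. This is a sketch of the general technique, not the authors' implementation.

import numpy as np

PATCH = 7          # patch side length (assumed)
HALF = PATCH // 2

def fill_holes(image, depth, hole_mask, iters=5, candidates=200, seed=0):
    """Fill hole_mask pixels of image (H x W x 3, float) by iterating
    (1) an ANN-style patch search over the known region and
    (2) a reconstruction step that averages the matched patches,
    mirroring the loop in the abstract."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    out = image.copy()

    # Depth guidance (assumption): disocclusion holes belong to background,
    # so take the median depth of the known region as a background level and
    # only draw source patches at least that far away. Convention here:
    # larger depth value = farther from the camera.
    bg_depth = np.median(depth[~hole_mask])

    # Initialization (assumption): flat background color. The paper specifies
    # its own initialization to keep foreground textures out of the hole.
    out[hole_mask] = np.median(image[~hole_mask], axis=0)

    # Precompute valid source patch centers: fully inside the image, free of
    # hole pixels, and passing the depth-guidance test.
    valid = [(y, x)
             for y in range(HALF, h - HALF)
             for x in range(HALF, w - HALF)
             if not hole_mask[y - HALF:y + HALF + 1,
                              x - HALF:x + HALF + 1].any()
             and depth[y, x] >= bg_depth]
    valid = np.array(valid)
    assert len(valid) > 0, "no background source patches available"

    # Target patch centers: one per hole pixel, clipped to the interior.
    hy, hx = np.where(hole_mask)
    targets = np.unique(
        np.stack([np.clip(hy, HALF, h - HALF - 1),
                  np.clip(hx, HALF, w - HALF - 1)], axis=1), axis=0)

    for _ in range(iters):
        acc = np.zeros_like(out)
        cnt = np.zeros((h, w, 1))
        for ty, tx in targets:
            tgt = out[ty - HALF:ty + HALF + 1, tx - HALF:tx + HALF + 1]
            # ANN step, stood in for by random candidate sampling; a real
            # implementation would use PatchMatch-style propagation over
            # space and time.
            idx = rng.integers(0, len(valid),
                               size=min(candidates, len(valid)))
            best, best_cost = None, np.inf
            for sy, sx in valid[idx]:
                src = out[sy - HALF:sy + HALF + 1, sx - HALF:sx + HALF + 1]
                cost = np.sum((src - tgt) ** 2)
                if cost < best_cost:
                    best, best_cost = src, cost
            acc[ty - HALF:ty + HALF + 1, tx - HALF:tx + HALF + 1] += best
            cnt[ty - HALF:ty + HALF + 1, tx - HALF:tx + HALF + 1] += 1
        # Reconstruction step: average overlapping matches, then write the
        # result back only inside the hole so known pixels are preserved.
        filled = acc / np.maximum(cnt, 1)
        out[hole_mask] = filled[hole_mask]
    return out

The unoptimized per-pixel loop keeps the two steps of the iteration explicit; in practice the search step is the part replaced by a genuine spatio-temporal ANN structure, which is what makes the global optimization tractable on video.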
Pages: 32874-32889
Number of pages: 16