Guided Depth Map Super-Resolution Using Recumbent Y Network

Cited by: 9
Authors
Li, Tao [1]
Dong, Xiucheng [1]
Lin, Hongwei [2]
Affiliations
[1] Xihua Univ, Sch Elect Engn & Elect Informat, Chengdu 610039, Peoples R China
[2] Northwest Minzu Univ, Coll Elect Engn, Lanzhou 730000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Depth map super-resolution; convolutional neural network; UNet network; atrous spatial pyramid pooling; attention mechanism;
DOI
10.1109/ACCESS.2020.3007667
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Low spatial resolution is a well-known problem for depth maps captured by low-cost consumer depth cameras. Depth map super-resolution (SR) can be used to enhance the resolution and improve the quality of depth maps. In this paper, we propose a recumbent Y network (RYNet) to integrate depth information and intensity information for depth map SR. Specifically, we introduce two weight-shared encoders to learn multi-scale depth and intensity features, respectively, and a single decoder to gradually fuse depth and intensity information for reconstruction. We also design a residual-channel-attention-based atrous spatial pyramid pooling structure to further enrich the features' scale diversity and exploit the correlations between multi-scale feature channels. Furthermore, violations of the co-occurrence assumption between depth discontinuities and intensity edges generate texture-transfer and depth-bleeding artifacts. Thus, we propose a spatial attention mechanism that mitigates these artifacts by adaptively learning the spatial relevance between intensity features and depth features and reweighting the intensity features before fusion. Experimental results demonstrate the superiority of the proposed RYNet over several state-of-the-art depth map SR methods.
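The abstract's key fusion idea can be illustrated with a minimal PyTorch-style sketch of a spatial attention block that learns a per-pixel relevance map from depth and intensity features, reweights the intensity features, and then fuses them with the depth features. This is only an illustration of the described mechanism: the module name (SpatialAttentionFusion), the layer sizes, and the concatenation-based fusion are assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    """Hypothetical sketch: reweight intensity features by a learned spatial
    relevance map before fusing them with depth features."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a single-channel spatial relevance map from both feature maps.
        self.attention = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Fuse depth features with the reweighted intensity features.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, depth_feat: torch.Tensor, intensity_feat: torch.Tensor) -> torch.Tensor:
        relevance = self.attention(torch.cat([depth_feat, intensity_feat], dim=1))
        # Suppress intensity edges that have no corresponding depth discontinuity,
        # which is the source of texture-transfer and depth-bleeding artifacts.
        gated_intensity = intensity_feat * relevance
        return self.fuse(torch.cat([depth_feat, gated_intensity], dim=1))

if __name__ == "__main__":
    # Example usage on random feature maps with assumed sizes.
    depth_feat = torch.randn(1, 64, 32, 32)
    intensity_feat = torch.randn(1, 64, 32, 32)
    fused = SpatialAttentionFusion(64)(depth_feat, intensity_feat)
    print(fused.shape)  # torch.Size([1, 64, 32, 32])

In the paper's pipeline such a block would sit in the decoder, where intensity features from the guidance encoder are merged with depth features; the sketch above only shows one such fusion step.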
Pages: 122695-122708
Page count: 14