Guided Depth Map Super-Resolution Using Recumbent Y Network

Cited by: 9
Authors
Li, Tao [1 ]
Dong, Xiucheng [1 ]
Lin, Hongwei [2 ]
Affiliations
[1] Xihua Univ, Sch Elect Engn & Elect Informat, Chengdu 610039, Peoples R China
[2] Northwest Minzu Univ, Coll Elect Engn, Lanzhou 730000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Depth map super-resolution; convolutional neural network; UNet network; atrous spatial pyramid pooling; attention mechanism;
DOI
10.1109/ACCESS.2020.3007667
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Low spatial resolution is a well-known problem for depth maps captured by low-cost consumer depth cameras. Depth map super-resolution (SR) can be used to enhance the resolution and improve the quality of depth maps. In this paper, we propose a recumbent Y network (RYNet) to integrate depth information and intensity information for depth map SR. Specifically, we introduce two weight-shared encoders to respectively learn multi-scale depth and intensity features, and a single decoder to gradually fuse depth information and intensity information for reconstruction. We also design a residual-channel-attention-based atrous spatial pyramid pooling structure to further enrich the features' scale diversity and exploit the correlations between multi-scale feature channels. Furthermore, violations of the co-occurrence assumption between depth discontinuities and intensity edges generate texture-transfer and depth-bleeding artifacts. Thus, we propose a spatial attention mechanism that mitigates these artifacts by adaptively learning the spatial relevance between intensity features and depth features and reweighting the intensity features before fusion. Experimental results demonstrate the superiority of the proposed RYNet over several state-of-the-art depth map SR methods.
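The spatial attention mechanism described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the per-pixel linear map (standing in for a learned 1x1 convolution), the weight vector `w`, and the bias `b` are placeholders, and the residual fusion form is an assumption. It only shows the core idea of computing a spatial relevance map from concatenated depth and intensity features and using it to reweight the intensity features before fusion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fuse(depth_feat, intensity_feat, w, b):
    """Reweight intensity features by a learned spatial attention map.

    depth_feat, intensity_feat: arrays of shape (C, H, W)
    w: placeholder weights of shape (2C,) standing in for a 1x1 conv
    b: scalar bias
    """
    # Stack depth and intensity features along the channel axis: (2C, H, W)
    stacked = np.concatenate([depth_feat, intensity_feat], axis=0)
    # Per-pixel linear map + sigmoid -> spatial attention map in [0, 1], shape (H, W)
    att = sigmoid(np.tensordot(w, stacked, axes=([0], [0])) + b)
    # Suppress intensity features where relevance is low, then fuse with depth
    return depth_feat + att[None, :, :] * intensity_feat

# Toy usage with constant features so the result is easy to check:
depth = np.ones((2, 4, 4))
intensity = np.full((2, 4, 4), 3.0)
w = np.zeros(4)            # zero weights -> attention is sigmoid(0) = 0.5 everywhere
fused = spatial_attention_fuse(depth, intensity, w, b=0.0)
# fused == 1 + 0.5 * 3 = 2.5 at every position
```

In the paper's pipeline this reweighting happens before the decoder fuses the two streams, so intensity edges that have no matching depth discontinuity can be down-weighted, which is how texture-transfer and depth-bleeding artifacts are suppressed.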
Pages: 122695 - 122708
Page count: 14