Discrete Cosine Transform Network for Guided Depth Map Super-Resolution

Cited by: 66
Authors:
Zhao, Zixiang [1 ,2 ]
Zhang, Jiangshe [1 ]
Xu, Shuang [1 ,3 ]
Lin, Zudi [2 ]
Pfister, Hanspeter [2 ]
Affiliations:
[1] Xi An Jiao Tong Univ, Xian, Peoples R China
[2] Harvard Univ, Cambridge, MA 02138 USA
[3] Northwestern Polytech Univ, Xian, Peoples R China
Funding:
National Natural Science Foundation of China
DOI:
10.1109/CVPR52688.2022.00561
Chinese Library Classification:
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Guided depth super-resolution (GDSR) is an essential topic in multi-modal image processing: it reconstructs high-resolution (HR) depth maps from low-resolution ones, often captured under suboptimal conditions, with the help of HR RGB images of the same scene. Existing methods face challenges in interpreting the working mechanism, extracting cross-modal features, and avoiding the over-transfer of RGB texture. We propose a novel Discrete Cosine Transform Network (DCTNet) that alleviates these problems from three aspects. First, the Discrete Cosine Transform (DCT) module reconstructs the multi-channel HR depth features by using the DCT to solve the channel-wise optimization problem derived from the image domain. Second, a semi-coupled feature extraction module uses shared convolutional kernels to extract common information and private kernels to extract modality-specific information. Third, an edge attention mechanism highlights the contours that are informative for guided upsampling. Extensive quantitative and qualitative evaluations demonstrate the effectiveness of DCTNet, which outperforms previous state-of-the-art methods with a relatively small number of parameters. The code is available at https://github.com/Zhaozixiang1228/GDSR-DCTNet.
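The channel-wise DCT computation at the heart of the first contribution can be illustrated with a short sketch. The snippet below is not the released DCTNet code; it is a minimal example, with function names of my own choosing (channel_dct, channel_idct), showing how an orthonormal 2D DCT can be applied independently to each channel of a feature stack and inverted using SciPy.

    # Illustrative sketch only, not the authors' implementation:
    # per-channel orthonormal 2D DCT over a (C, H, W) feature stack.
    import numpy as np
    from scipy.fft import dctn, idctn

    def channel_dct(feats):
        """2D DCT over the spatial axes, applied to each channel."""
        return dctn(feats, axes=(-2, -1), norm="ortho")

    def channel_idct(coeffs):
        """Inverse 2D DCT over the spatial axes, per channel."""
        return idctn(coeffs, axes=(-2, -1), norm="ortho")

    feats = np.random.rand(8, 32, 32)                # 8 hypothetical depth-feature channels
    coeffs = channel_dct(feats)                      # channel-wise frequency coefficients
    assert np.allclose(feats, channel_idct(coeffs))  # round-trips up to floating point

In the paper, the optimization problem derived in the image domain is solved channel by channel in this coefficient space before transforming back; the closed-form solver itself is beyond this sketch.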
Pages: 5687-5697
Page count: 11
Related Papers
50 items in total
  • [41] Explainable Unfolding Network For Joint Edge-Preserving Depth Map Super-Resolution
    Zhang, Jialong
    Zhao, Lijun
    Zhang, Jinjing
    Wang, Ke
    Wang, Anhong
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 888 - 893
  • [42] DAEANet: Dual auto-encoder attention network for depth map super-resolution
    Cao, Xiang
    Luo, Yihao
    Zhu, Xianyi
    Zhang, Liangqi
    Xu, Yan
    Shen, Haibo
    Wang, Tianjiang
    Feng, Qi
    NEUROCOMPUTING, 2021, 454 : 350 - 360
  • [43] Digging into depth-adaptive structure for guided depth super-resolution
    Hou, Yue
    Nie, Lang
    Lin, Chunyu
    Guo, Baoqing
    Zhao, Yao
    DISPLAYS, 2024, 84
  • [44] CGFTNet: Content-Guided Frequency Domain Transform Network for Face Super-Resolution
    INFORMATION, 12
  • [45] DEPTH SUPER-RESOLUTION WITH DEEP EDGE-INFERENCE NETWORK AND EDGE-GUIDED DEPTH FILLING
    Ye, Xinchen
    Duan, Xiangyue
    Li, Haojie
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 1398 - 1402
  • [46] Color-Guided Depth Map Super Resolution Using Convolutional Neural Network
    Ni, Min
    Lei, Jianjun
    Cong, Runmin
    Zheng, Kaifu
    Peng, Bo
    Fan, Xiaoting
    IEEE ACCESS, 2017, 5 : 26666 - 26672
  • [47] MIG-Net: Multi-Scale Network Alternatively Guided by Intensity and Gradient Features for Depth Map Super-Resolution
    Zuo, Yifan
    Wang, Hao
    Fang, Yuming
    Huang, Xiaoshui
    Shang, Xiwu
    Wu, Qiang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 24 : 3506 - 3519
  • [48] Progressive Multi-scale Reconstruction for Guided Depth Map Super-Resolution via Deep Residual Gate Fusion Network
    Wen, Yang
    Wang, Jihong
    Li, Zhen
    Sheng, Bin
    Li, Ping
    Chi, Xiaoyu
    Mao, Lijuan
    ADVANCES IN COMPUTER GRAPHICS, CGI 2021, 2021, 13002 : 67 - 79
  • [49] Spatio-temporal Super-Resolution Using Depth Map
    Awatsu, Yusaku
    Kawai, Norihiko
    Sato, Tomokazu
    Yokoya, Naokazu
    IMAGE ANALYSIS, PROCEEDINGS, 2009, 5575 : 696 - 705
  • [50] DEPTH-MAP SUPER-RESOLUTION FOR ASYMMETRIC STEREO IMAGES
    Garcia, Diogo C.
    Dorea, Camilo
    de Queiroz, Ricardo L.
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 1548 - 1552