Hierarchical Features Driven Residual Learning for Depth Map Super-Resolution

Cited by: 157
Authors
Guo, Chunle [1 ]
Li, Chongyi [1 ]
Guo, Jichang [1 ]
Cong, Runmin [1 ]
Fu, Huazhu [2 ]
Han, Ping [3 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Incept Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates
[3] Civil Aviat Univ China, Coll Elect Informat & Automat, Tianjin 300300, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Convolutional neural network (CNN); depth map super-resolution (SR); residual learning; image reconstruction; RESOLUTION;
DOI
10.1109/TIP.2018.2887029
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
The rapid development of affordable and portable consumer depth cameras facilitates the use of depth information in many computer vision tasks such as intelligent vehicles and 3D reconstruction. However, depth maps captured by low-cost depth sensors (e.g., Kinect) usually suffer from low spatial resolution, which limits their potential applications. In this paper, we propose a novel deep network for depth map super-resolution (SR), called DepthSR-Net. The proposed DepthSR-Net automatically infers a high-resolution (HR) depth map from its low-resolution (LR) version by hierarchical features driven residual learning. Specifically, DepthSR-Net is built on a residual U-Net deep network architecture. Given an LR depth map, we first upsample it to the desired HR size by bicubic interpolation and then construct an input pyramid to achieve multi-level receptive fields. Next, we extract hierarchical features from the input pyramid, the intensity image, and the encoder-decoder structure of the U-Net. Finally, we learn the residual between the interpolated depth map and the corresponding HR one using these rich hierarchical features. The final HR depth map is obtained by adding the learned residual to the interpolated depth map. We conduct an ablation study to demonstrate the effectiveness of each component in the proposed network. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods. In addition, the potential use of the proposed network for other low-level vision problems is discussed.
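The abstract describes a residual-learning scheme: the network predicts only the difference between the bicubically interpolated LR depth map and the target HR depth map, guided by the aligned intensity image. Below is a minimal sketch of that idea, assuming PyTorch; the module name, layer widths, and the plain convolutional body are illustrative placeholders and do not reproduce the authors' residual U-Net, input pyramid, or exact DepthSR-Net configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualDepthSR(nn.Module):
    """Illustrative residual-learning scheme for depth map SR:
    predict only the residual between the bicubically upsampled
    LR depth map and the target HR depth map."""

    def __init__(self, channels=64):
        super().__init__()
        # Hypothetical residual branch; DepthSR-Net itself uses a
        # residual U-Net with an input pyramid and intensity guidance.
        self.body = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, lr_depth, intensity, scale=4):
        # Bicubic upsampling gives the coarse HR estimate.
        up = F.interpolate(lr_depth, scale_factor=scale,
                           mode="bicubic", align_corners=False)
        # Intensity guidance: concatenate the aligned HR intensity image.
        residual = self.body(torch.cat([up, intensity], dim=1))
        # Final HR depth = interpolated depth + learned residual.
        return up + residual


if __name__ == "__main__":
    net = ResidualDepthSR()
    lr = torch.rand(1, 1, 32, 32)      # low-resolution depth map
    gray = torch.rand(1, 1, 128, 128)  # aligned HR intensity image
    hr = net(lr, gray, scale=4)
    print(hr.shape)                    # torch.Size([1, 1, 128, 128])
```

Training such a model would minimize a reconstruction loss (e.g., L1 or L2) between the predicted HR depth map and the ground truth, so the network body only needs to model the high-frequency residual rather than the full depth values.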
Pages: 2545-2557
Number of pages: 13
Related Papers
50 records in total
  • [31] Depth Map Super-Resolution Using Guided Deformable Convolution
    Kim, Joon-Yeon
    Ji, Seowon
    Baek, Seung-Jin
    Jung, Seung-Won
    Ko, Sung-Jea
    IEEE ACCESS, 2021, 9 : 66626 - 66635
  • [32] Pyramid-Structured Depth MAP Super-Resolution Based on Deep Dense-Residual Network
    Huang, Liqin
    Zhang, Jianjia
    Zuo, Yifan
    Wu, Qiang
    IEEE SIGNAL PROCESSING LETTERS, 2019, 26 (12) : 1723 - 1727
  • [33] Color Guided Depth Map Super-Resolution based on a Deep Self-Learning Approach
    Takeda, Kyohei
    Iwamoto, Yutaro
    Chen, Yen-Wei
    2020 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2020, : 588 - 591
  • [34] Single Image Super-resolution Based on Residual Learning
    Xie, Chao
    Lu, Xiaobo
    PROCEEDINGS OF 2017 INTERNATIONAL CONFERENCE ON VIDEO AND IMAGE PROCESSING (ICVIP 2017), 2017, : 124 - 129
  • [35] Multi-Direction Dictionary Learning Based Depth Map Super-Resolution With Autoregressive Modeling
    Wang, Jin
    Xu, Wei
    Cai, Jian-Feng
    Zhu, Qing
    Shi, Yunhui
    Yin, Baocai
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (06) : 1470 - 1484
  • [36] Edge Orientation Driven Depth Super-Resolution for View Synthesis
    Yao, Chao
    Xiao, Jimin
    Jin, Jian
    Ban, Xiaojuan
    IMAGE AND GRAPHICS, ICIG 2019, PT III, 2019, 11903 : 107 - 121
  • [37] Deep networks for image super-resolution using hierarchical features
    Yang, Xin
    Zhang, Yifan
    Zhou, Dake
    BULLETIN OF THE POLISH ACADEMY OF SCIENCES-TECHNICAL SCIENCES, 2022, 70 (01)
  • [38] Depth Map Super-Resolution via Extended Weighted Mode Filtering
    Fu, Mingliang
    Zhou, Weijia
    2016 30TH ANNIVERSARY OF VISUAL COMMUNICATION AND IMAGE PROCESSING (VCIP), 2016,
  • [39] Depth Map Super-Resolution Reconstruction Based on Convolutional Neural Networks
    Li, S.
    Lei, G.
    Fan, R.
    Chinese Optical Society, 2017, (37):
  • [40] Edge-Preserving Depth Map Super-Resolution with Intensity Guidance
    Wang, Xiaochuan
    Liang, Xiaohui
    JOURNAL OF BEIJING INSTITUTE OF TECHNOLOGY, 2019, 28 (01) : 51 - 56