Structure Flow-Guided Network for Real Depth Super-Resolution

Cited: 0
Authors
Yuan, Jiayi [1 ]
Jiang, Haobo [1 ]
Li, Xiang [1 ]
Qian, Jianjun [1 ]
Li, Jun [1 ]
Yang, Jian [1 ]
Affiliations
[1] PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
Keywords: (none listed)
DOI: not available
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Real depth super-resolution (DSR), unlike synthetic settings, is a challenging task because of the structural distortion and edge noise caused by natural degradation in real-world low-resolution (LR) depth maps. These defects produce significant structural inconsistency between the depth map and the RGB guidance, which can confuse the RGB-structure guidance and thereby degrade DSR quality. In this paper, we propose a novel structure flow-guided DSR framework, in which a cross-modality flow map is learned to guide the transfer of RGB-structure information for precise depth upsampling. Specifically, our framework consists of a cross-modality flow-guided upsampling network (CFUNet) and a flow-enhanced pyramid edge attention network (PEANet). CFUNet contains a trilateral self-attention module that combines geometric and semantic correlations for reliable cross-modality flow learning. The learned flow maps are then combined with a grid-sampling mechanism for coarse high-resolution (HR) depth prediction. PEANet integrates the learned flow map as edge attention into a pyramid network, hierarchically learning edge-focused guidance features for depth edge refinement. Extensive experiments on real and synthetic DSR datasets verify that our approach achieves excellent performance compared to state-of-the-art methods. Our code is available at: https://github.com/Yuanjiayii/DSR_SFG.
Pages: 3340-3348
Number of pages: 9
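
To make the abstract's grid-sampling step concrete, below is a minimal PyTorch sketch of flow-guided depth upsampling: a learned cross-modality flow map displaces a regular sampling grid before warping a pre-upsampled LR depth map into a coarse HR prediction. This is a generic illustration under stated assumptions, not the authors' released implementation; the function name flow_guided_upsample, the tensor layout, and the bicubic pre-upsampling are assumptions.

    import torch
    import torch.nn.functional as F

    def flow_guided_upsample(lr_depth: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        """Warp a pre-upsampled LR depth map with a learned cross-modality flow.

        lr_depth: (B, 1, H, W) depth map already resized (e.g. bicubic) to the
                  target HR resolution.  (Assumed layout, not from the paper's code.)
        flow:     (B, 2, H, W) per-pixel (dx, dy) offsets in pixel units.
        Returns a coarse HR depth prediction of shape (B, 1, H, W).
        """
        b, _, h, w = lr_depth.shape
        # Regular pixel grid in grid_sample's normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=lr_depth.device),
            torch.linspace(-1.0, 1.0, w, device=lr_depth.device),
            indexing="ij",
        )
        base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, h, w, 2)
        # Convert pixel offsets to normalized offsets and displace the grid.
        norm_flow = torch.stack(
            (flow[:, 0] * 2.0 / max(w - 1, 1),
             flow[:, 1] * 2.0 / max(h - 1, 1)),
            dim=-1,
        )
        # Differentiable bilinear lookup realizes the flow-guided warping.
        return F.grid_sample(lr_depth, base_grid + norm_flow,
                             mode="bilinear", padding_mode="border",
                             align_corners=True)

Because torch.nn.functional.grid_sample is differentiable with respect to both the input and the sampling grid, a flow map used this way can be trained end-to-end together with a downstream edge-refinement stage such as the pyramid edge attention network described in the abstract.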