Structure Flow-Guided Network for Real Depth Super-resolution

Cited by: 0
|
Authors
Yuan, Jiayi [1 ]
Jiang, Haobo [1 ]
Li, Xiang [1 ]
Qian, Jianjun [1 ]
Li, Jun [1 ]
Yang, Jian [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Jiangsu Key Lab Image & Video Understanding Socia, PCA Lab,Key Lab Intelligent Percept & Syst HighDi, Nanjing, Peoples R China
Keywords
DOI
N/A
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Real depth super-resolution (DSR), unlike synthetic settings, is a challenging task due to the structural distortion and edge noise caused by natural degradation in real-world low-resolution (LR) depth maps. These defects create significant structural inconsistency between the depth map and the RGB guidance, which can confuse the RGB-structure guidance and thereby degrade DSR quality. In this paper, we propose a novel structure flow-guided DSR framework, in which a cross-modality flow map is learned to guide the transfer of RGB-structure information for precise depth upsampling. Specifically, our framework consists of a cross-modality flow-guided upsampling network (CFU-Net) and a flow-enhanced pyramid edge attention network (PEANet). CFU-Net contains a trilateral self-attention module that combines geometric and semantic correlations for reliable cross-modality flow learning. The learned flow maps are then combined with a grid-sampling mechanism for coarse high-resolution (HR) depth prediction. PEANet integrates the learned flow map as edge attention into a pyramid network to hierarchically learn edge-focused guidance features for depth edge refinement. Extensive experiments on real and synthetic DSR datasets verify that our approach achieves excellent performance compared with state-of-the-art methods. Our code is available at: https://github.com/Yuanjiayii/DSR SFG.
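The grid-sampling step described in the abstract, warping the positions at which the LR depth map is bilinearly sampled by a learned cross-modality flow field, can be sketched as below. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names are hypothetical, and the flow field (which CFU-Net would predict from RGB and depth features) is passed in as a plain array.

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Bilinearly sample a 2-D array `img` at float coordinates (ys, xs)."""
    H, W = img.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)   # vertical interpolation weight
    wx = np.clip(xs - x0, 0.0, 1.0)   # horizontal interpolation weight
    top = img[y0, x0] * (1 - wx) + img[y0, x0 + 1] * wx
    bot = img[y0 + 1, x0] * (1 - wx) + img[y0 + 1, x0 + 1] * wx
    return top * (1 - wy) + bot * wy

def flow_guided_upsample(lr_depth, flow, scale):
    """Upsample `lr_depth` by `scale`, offsetting each HR sample position
    by a learned flow map.

    flow: array of shape (2, H*scale, W*scale) holding per-pixel (dy, dx)
    offsets in LR pixel units, as a cross-modality flow network would predict.
    Zero flow reduces this to plain bilinear upsampling.
    """
    h, w = lr_depth.shape
    H, W = h * scale, w * scale
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Base grid: map each HR pixel center back to LR coordinates, then
    # displace it by the learned flow before sampling.
    ys = (ys + 0.5) / scale - 0.5 + flow[0]
    xs = (xs + 0.5) / scale - 0.5 + flow[1]
    return bilinear_sample(lr_depth, ys, xs)
```

With a nonzero flow field, sample positions are pulled toward structure indicated by the RGB guidance, which is the intuition behind coarse HR depth prediction via grid sampling.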
Pages: 3340 - 3348
Page count: 9
Related Papers
50 in total
  • [1] FLOW-GUIDED DEFORMABLE ATTENTION NETWORK FOR FAST ONLINE VIDEO SUPER-RESOLUTION
    Yang, Xi
    Zhang, Xindong
    Zhang, Lei
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 390 - 394
  • [2] FGBRSN: Flow-Guided Gated Bi-Directional Recurrent Separated Network for Video Super-Resolution
    Xue, Weikang
    Gao, Lihang
    Hu, Shuiyi
    Yu, Tianqi
    Hu, Jianling
    IEEE ACCESS, 2023, 11 : 103419 - 103430
  • [3] Digging into depth-adaptive structure for guided depth super-resolution
    Hou, Yue
    Nie, Lang
    Lin, Chunyu
    Guo, Baoqing
    Zhao, Yao
    DISPLAYS, 2024, 84
  • [4] BSRT: Improving Burst Super-Resolution with Swin Transformer and Flow-Guided Deformable Alignment
    Luo, Ziwei
    Li, Youwei
    Cheng, Shen
    Yu, Lei
    Wu, Qi
    Wen, Zhihong
    Fan, Haoqiang
    Sun, Jian
    Liu, Shuaicheng
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 997 - 1007
  • [5] Hierarchical Edge Refinement Network for Guided Depth Map Super-Resolution
    Zhang, Shuo
    Pan, Zexu
    Lv, Yichang
    Lin, Youfang
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2024, 10 : 469 - 478
  • [6] Discrete Cosine Transform Network for Guided Depth Map Super-Resolution
    Zhao, Zixiang
    Zhang, Jiangshe
    Xu, Shuang
    Lin, Zudi
    Pfister, Hanspeter
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 5687 - 5697
  • [7] ATMNet: Adaptive Texture Migration Network for Guided Depth Super-Resolution
    Guo, Kehua
    Tan, Xuyang
    Zhu, Xiangyuan
    Guo, Shaojun
    Xi, Zhipeng
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2025, 21 (01)
  • [8] Guided Depth Map Super-Resolution Using Recumbent Y Network
    Li, Tao
    Dong, Xiucheng
    Lin, Hongwei
    IEEE ACCESS, 2020, 8 : 122695 - 122708
  • [9] DSRNet: Depth Super-Resolution Network guided by blurry depth and clear intensity edges
    Lan, Hui
    Jung, Cheolkon
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2024, 121
  • [10] Guided Depth Map Super-Resolution: A Survey
    Zhong, Zhiwei
    Liu, Xianming
    Jiang, Junjun
    Zhao, Debin
    Ji, Xiangyang
    ACM COMPUTING SURVEYS, 2023, 55 (14S)