Learning content-aware feature fusion for guided depth map super-resolution

Cited: 0
Authors
Zuo, Yifan [1 ]
Wang, Hao [1 ]
Xu, Yaping [1 ]
Huang, Huimin [1 ]
Huang, Xiaoshui [2 ]
Xia, Xue [1 ]
Fang, Yuming [1 ]
Affiliations
[1] Jiangxi Univ Finance & Econ, 665 Yuping West St, Nanchang 330013, Jiangxi, Peoples R China
[2] Shanghai Artificial Intelligence Lab, Yunjing Rd 701, Shanghai 200232, Peoples R China
Keywords
Convolutional neural network; Joint trilateral filter; Guided depth map super-resolution; Content-dependent network;
DOI
10.1016/j.image.2024.117140
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
RGB-D data, comprising paired RGB color images and depth maps, is widely used in downstream computer vision tasks. However, compared with the acquisition of high-resolution color images, the depth maps captured by consumer-level sensors are always of low resolution. After decades of research, even the state-of-the-art (SOTA) methods for depth map super-resolution cannot adaptively tune the guidance fusion for every feature position, because they rely on channel-wise feature concatenation with spatially shared convolutional kernels. This paper proposes JTFNet to resolve this issue, which simulates the traditional Joint Trilateral Filter (JTF). Specifically, a novel JTF block is introduced to adaptively tune the fusion pattern between the color features and the depth features at every feature position. Moreover, based on a variant of the JTF block whose target features and guidance features have cross-scale shapes, the fusion of depth features is performed bi-directionally, so the error accumulation across scales can be effectively mitigated by iterative high-resolution (HR) feature guidance. Extensive experiments against SOTA methods are conducted on mainstream synthetic and real datasets, i.e., Middlebury, NYU and ToF-Mark, and show remarkable improvement by the proposed JTFNet.
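To illustrate the general idea of "adaptively tuning the fusion pattern for every feature position" (as opposed to fixed, spatially shared kernels over concatenated channels), the sketch below shows one common way such content-dependent fusion can be written in PyTorch. This is a minimal, hypothetical sketch and not the paper's JTF block; the module and parameter names (ContentAwareFusionBlock, weight_net, refine) are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' implementation): fusion weights are
# predicted from the content of both the depth (target) and color (guidance)
# feature maps, so the mixing pattern varies per spatial position.
import torch
import torch.nn as nn


class ContentAwareFusionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predicts a per-position, per-channel gate from both feature maps.
        self.weight_net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, depth_feat: torch.Tensor, color_feat: torch.Tensor) -> torch.Tensor:
        # Gate in [0, 1] decides, at every position, how much color guidance
        # is injected into the depth branch.
        gate = self.weight_net(torch.cat([depth_feat, color_feat], dim=1))
        fused = gate * color_feat + (1.0 - gate) * depth_feat
        return depth_feat + self.refine(fused)  # residual connection


if __name__ == "__main__":
    block = ContentAwareFusionBlock(channels=64)
    d = torch.randn(1, 64, 64, 64)  # depth features
    c = torch.randn(1, 64, 64, 64)  # color (guidance) features
    print(block(d, c).shape)        # torch.Size([1, 64, 64, 64])
```

Because the gate is computed from the inputs themselves, the effective fusion differs across edges, textures, and flat regions, which is the property the abstract argues spatially shared convolution over concatenated channels lacks.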
Pages: 11
Related Papers
50 records in total
  • [1] Image Super-Resolution With Content-Aware Feature Processing
    Mehta N.
    Murala S.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (01): 179 - 191
  • [2] Spherical Space Feature Decomposition for Guided Depth Map Super-Resolution
    Zhao, Zixiang
    Zhang, Jiangshe
    Gu, Xiang
    Tan, Chengli
    Xu, Shuang
    Zhang, Yulun
    Timofte, Radu
    Van Gool, Luc
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 12513 - 12524
  • [3] Guided Depth Map Super-Resolution: A Survey
    Zhong, Zhiwei
    Liu, Xianming
    Jiang, Junjun
    Zhao, Debin
    Ji, Xiangyang
    ACM COMPUTING SURVEYS, 2023, 55 (14S)
  • [4] Joint-Feature Guided Depth Map Super-Resolution With Face Priors
    Yang, Shuai
    Liu, Jiaying
    Fang, Yuming
    Guo, Zongming
    IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48 (01) : 399 - 411
  • [5] CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution
    Hong, Cheeun
    Baik, Sungyong
    Kim, Heewon
    Nah, Seungjun
    Lee, Kyoung Mu
    COMPUTER VISION, ECCV 2022, PT VII, 2022, 13667 : 367 - 383
  • [6] Joint Learning Content and Degradation Aware Feature for Blind Super-Resolution
    Zhou, Yifeng
    Lin, Churning
    Luo, Donghao
    Liu, Yong
    Tai, Ying
    Wang, Chengjie
    Chen, Mingang
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 2606 - 2616
  • [7] Content-Aware Local GAN for Photo-Realistic Super-Resolution
    Park, JoonKyu
    Son, Sanghyun
    Lee, Kyoung Mu
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 10551 - 10560
  • [8] Depth Map Super-Resolution Using Guided Deformable Convolution
    Kim, Joon-Yeon
    Ji, Seowon
    Baek, Seung-Jin
    Jung, Seung-Won
    Ko, Sung-Jea
    IEEE ACCESS, 2021, 9 : 66626 - 66635
  • [9] Deformable Enhancement and Adaptive Fusion for Depth Map Super-Resolution
    Liu, Peng
    Zhang, Zonghua
    Meng, Zhaozong
    Gao, Nan
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 204 - 208
  • [10] Degradation-Guided Multi-Modal Fusion Network for Depth Map Super-Resolution
    Han, Lu
    Wang, Xinghu
    Zhou, Fuhui
    Wu, Diansheng
    ELECTRONICS, 2024, 13 (20)