Robust Degradation Representation via Efficient Diffusion Model for Blind Super-Resolution

Cited: 0
Authors
Ye, Fangchen [1 ]
Zhou, Yubo [1 ]
Cheng, Longyu [1 ]
Qu, Yanyun [1 ]
Affiliations
[1] Xiamen Univ, Sch Informat, Xiamen, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XI | 2024 / Vol. 14435
Funding
National Natural Science Foundation of China
Keywords
Blind SR; Diffusion model; Degradation representation;
DOI
10.1007/978-981-99-8552-4_3
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Blind super-resolution (SR) is a challenging low-level vision task dedicated to recovering corrupted details in low-resolution (LR) images with complex, unknown degradations. Mainstream blind SR methods mainly adopt the paradigm of capturing a robust degradation representation from the LR image as a condition and then performing deep feature reconstruction. However, the manifold of degradation factors makes flexible estimation challenging. In this paper, we propose a residual-guided diffusion degradation representation scheme (Diff-BSR) for blind SR. Specifically, we leverage the powerful generative capability of the diffusion model (DM) to implicitly model diverse degradation representations, which helps resist the disturbance of varied inputs. Meanwhile, to reduce the expensive computational complexity and training cost, we design a lightweight degradation extractor in the residual domain, which transfers the target residual distribution into a low-dimensional feature space. As a result, Diff-BSR requires only about 60 sampling steps and a much smaller denoising network. Moreover, we design a Degradation-Aware Multi-head Self-Attention mechanism that effectively fuses the discriminative representations with the intermediate features of the network to enhance robustness. Extensive experiments on mainstream blind SR benchmarks show that Diff-BSR achieves state-of-the-art or comparable performance compared with existing methods.
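The paper does not include code, so the following is a minimal PyTorch sketch of one plausible reading of the Degradation-Aware Multi-head Self-Attention described above: a degradation embedding (as would be produced by the residual-domain extractor) modulates the keys and values of a self-attention layer over intermediate SR features. The module name, tensor shapes, and the scale-and-shift conditioning scheme are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class DegradationAwareMHSA(nn.Module):
    # Hypothetical sketch of degradation-aware multi-head self-attention:
    # a degradation embedding modulates keys/values via per-channel
    # scale-and-shift before attention. Not the paper's actual code.
    def __init__(self, dim=64, deg_dim=128, num_heads=4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, dim * 2, bias=False)
        self.deg_proj = nn.Linear(deg_dim, dim * 2)  # -> per-channel scale, shift
        self.out = nn.Linear(dim, dim)

    def forward(self, feat, deg):
        # feat: (B, C, H, W) intermediate SR features
        # deg:  (B, deg_dim) degradation representation from the extractor
        b, c, h, w = feat.shape
        x = feat.flatten(2).transpose(1, 2)  # (B, HW, C)
        scale, shift = self.deg_proj(deg).chunk(2, dim=-1)
        x_mod = x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        q = self.to_q(x)                       # queries from raw features
        k, v = self.to_kv(x_mod).chunk(2, dim=-1)  # keys/values degradation-aware

        def split_heads(t):
            return t.view(b, -1, self.num_heads, c // self.num_heads).transpose(1, 2)

        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = attn.softmax(dim=-1) @ v         # (B, heads, HW, C/heads)
        out = out.transpose(1, 2).reshape(b, h * w, c)
        return self.out(out).transpose(1, 2).reshape(b, c, h, w) + feat

# Usage sketch
layer = DegradationAwareMHSA(dim=64, deg_dim=128, num_heads=4)
feat = torch.randn(2, 64, 16, 16)
deg = torch.randn(2, 128)
print(layer(feat, deg).shape)  # torch.Size([2, 64, 16, 16])

Conditioning the keys and values rather than the queries lets each spatial position attend to degradation-adjusted context while the query stream stays anchored to the original features; other fusion choices (e.g., cross-attention against the embedding alone) would be equally consistent with the abstract.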
Pages: 26-38
Number of pages: 13
Related Papers
50 records in total
  • [1] Generation diffusion degradation: Simple and efficient design for blind super-resolution
    Xu, Ling
    Zhou, Haoran
    Chen, Qiaochuan
    Li, Guangyao
    KNOWLEDGE-BASED SYSTEMS, 2024, 299
  • [2] Unsupervised Degradation Representation Learning for Blind Super-Resolution
    Wang, Longguang
    Wang, Yingqian
    Dong, Xiaoyu
    Xu, Qingyu
    Yang, Jungang
    An, Wei
    Guo, Yulan
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 10576 - 10585
  • [3] A boosted degradation representation learning for blind image super-resolution
    Tang, Yinggan
    Zhang, Xiang
    Bu, Chunning
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [4] Efficient blind super-resolution imaging via adaptive degradation-aware estimation
    Yang, Haoran
    Li, Qilei
    Meng, Bin
    Jeon, Gwanggil
    Liu, Kai
    Yang, Xiaomin
    KNOWLEDGE-BASED SYSTEMS, 2024, 297
  • [5] Frequency aggregation network for blind super-resolution based on degradation representation
    Zhang, Yan
    Liu, Ziyang
    Liu, Shudong
    Sun, Yemei
    DIGITAL SIGNAL PROCESSING, 2023, 133
  • [6] Meta-Learning-Based Degradation Representation for Blind Super-Resolution
    Xia, Bin
    Tian, Yapeng
    Zhang, Yulun
    Hang, Yucheng
    Yang, Wenming
    Liao, Qingmin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 3383 - 3396
  • [7] Blind super-resolution model based on degradation-aware
    Cai Jian-feng
    Jiang Nian-de
    CHINESE JOURNAL OF LIQUID CRYSTALS AND DISPLAYS, 2023, 38 (09) : 1224 - 1233
  • [8] Efficient Blind Image Super-Resolution
    Vais, Olga
    Makarov, Ilya
    ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2023, PT II, 2023, 14135 : 229 - 240