Pyramid Attention Network for Image Restoration

Cited by: 46
Authors
Mei, Yiqun [1 ]
Fan, Yuchen [2 ]
Zhang, Yulun [3 ]
Yu, Jiahui [4 ]
Zhou, Yuqian [5 ]
Liu, Ding [6 ]
Fu, Yun [7 ]
Huang, Thomas S. [8 ]
Shi, Humphrey [9 ,10 ,11 ,12 ]
Affiliations
[1] Johns Hopkins Univ, Baltimore, MD USA
[2] Meta Real Labs, Menlo Pk, CA USA
[3] Swiss Fed Inst Technol, Zurich, Switzerland
[4] Google Brain, Bellevue, WA USA
[5] Adobe, Seattle, WA USA
[6] ByteDance, Mountain View, CA USA
[7] Northeastern Univ, Boston, MA USA
[8] UIUC, Champaign, IL 61801 USA
[9] Georgia Tech, Atlanta, GA 30332 USA
[10] UIUC, Champaign, IL 61801 USA
[11] Univ Oregon, Eugene, OR USA
[12] PicsArt, San Francisco, CA 94105 USA
Keywords
Image restoration; Image denoising; Demosaicing; Compression artifact reduction; Super-resolution;
DOI
10.1007/s11263-023-01843-5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Self-similarity refers to the image prior, widely used in image restoration algorithms, that small but similar patterns tend to recur at different locations and scales. However, recent advanced deep convolutional neural network-based methods for image restoration do not take full advantage of self-similarity, because they rely on self-attention modules that only process information at a single scale. To address this problem, we present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid. Inspired by the observation that corruptions, such as noise or compression artifacts, drop drastically at coarser image scales, our attention module is designed to borrow clean signals from their "clean" correspondences at the coarser levels. The proposed pyramid attention module is a generic building block that can be flexibly integrated into various neural architectures. Its effectiveness is validated through extensive experiments on multiple image restoration tasks: image denoising, demosaicing, compression artifact reduction, and super-resolution. Without any bells and whistles, our PANet (pyramid attention module with simple network backbones) produces state-of-the-art results with superior accuracy and visual quality. Our code is available at
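For illustration, the following is a minimal PyTorch-style sketch of the cross-scale attention idea described in the abstract: queries from the full-resolution feature map attend over keys and values gathered from downscaled copies of the same features, so responses can be borrowed from coarser, less corrupted pyramid levels. This is a simplified, pixel-wise approximation written under assumptions; the class name PyramidAttention and the scales/reduction parameters are hypothetical and do not reproduce the authors' patch-based reference implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidAttention(nn.Module):
    """Simplified cross-scale attention sketch (hypothetical, not the official code)."""
    def __init__(self, channels, scales=(1.0, 0.75, 0.5), reduction=2):
        super().__init__()
        self.scales = scales
        inner = channels // reduction
        self.to_q = nn.Conv2d(channels, inner, 1)     # query embedding
        self.to_k = nn.Conv2d(channels, inner, 1)     # key embedding
        self.to_v = nn.Conv2d(channels, channels, 1)  # value features
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2).transpose(1, 2)            # (B, HW, C')
        keys, values = [], []
        for s in self.scales:
            # One pyramid level: bicubic downscaling attenuates noise/artifacts.
            xs = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode='bicubic', align_corners=False)
            keys.append(self.to_k(xs).flatten(2))               # (B, C', N_s)
            values.append(self.to_v(xs).flatten(2))             # (B, C,  N_s)
        k = torch.cat(keys, dim=2)                               # (B, C', N)
        v = torch.cat(values, dim=2).transpose(1, 2)             # (B, N, C)
        # Every query position attends over candidates from all pyramid levels.
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)  # (B, HW, N)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.out(out)                                 # residual connection

feat = torch.randn(1, 64, 48, 48)
print(PyramidAttention(64)(feat).shape)  # torch.Size([1, 64, 48, 48])

In this sketch the module is residual and shape-preserving, so it could in principle be dropped between convolutional blocks of a restoration backbone, which is the kind of plug-in use the abstract describes.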
Pages: 3207 - 3225
Number of pages: 19