A Multi-Attention Feature Distillation Neural Network for Lightweight Single Image Super-Resolution

Times Cited: 0
Authors
Zhang, Yongfei [1 ,2 ]
Lin, Xinying [1 ,3 ]
Yang, Hong [1 ]
He, Jie [4 ]
Qing, Linbo [1 ]
He, Xiaohai [1 ]
Li, Yi [5 ]
Chen, Honggang [1 ,6 ]
Affiliations
[1] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Peoples R China
[2] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
[3] Tianjin Univ Technol, Minist Educ, Key Lab Comp Vis & Syst, Tianjin 300384, Peoples R China
[4] Wuzhou Univ, Guangxi Key Lab Machine Vis & Intelligent Control, Wuzhou 543002, Peoples R China
[5] DI Sinma Sichuan Machinery Co Ltd, Suining 629201, Peoples R China
[6] Yunnan Univ, Yunnan Key Lab Software Engn, Kunming 650600, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SPARSE REPRESENTATION; INTERPOLATION;
DOI
10.1155/2024/3255233
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
In recent years, deep convolutional neural networks (CNNs) have delivered remarkable performance improvements for single image super-resolution (SISR). Nevertheless, a high proportion of CNN-based SISR models rely on deep or wide architectures with large numbers of network parameters and high computational complexity. How to fully exploit deep features while striking a balance between model complexity and reconstruction performance remains one of the main challenges in this field. To address this problem, building on the well-known information multi-distillation model, a multi-attention feature distillation network termed MAFDN is developed for lightweight and accurate SISR. Specifically, an effective multi-attention feature distillation block (MAFDB) is designed and used as the basic feature extraction unit in MAFDN. With the help of multi-attention layers, including pixel attention, spatial attention, and channel attention, MAFDB uses multiple information distillation branches to learn more discriminative and representative features. Furthermore, MAFDB introduces a residual block (OPCRB) based on the depthwise over-parameterized convolutional layer (DO-Conv) to enhance its capability without incurring any increase in parameters or computation at the inference stage. Results on commonly used datasets demonstrate that MAFDN outperforms existing representative lightweight SISR models when both reconstruction performance and model complexity are taken into consideration. For example, for x4 SR on Set5, MAFDN (597K/33.79G) obtains 0.21 dB/0.0037 and 0.10 dB/0.0015 PSNR/SSIM gains over the attention-based SR model AFAN (692K/50.90G) and the feature distillation-based SR model DDistill-SR (675K/32.83G), respectively.
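The "information distillation" idea the abstract builds on can be sketched as a channel-budget calculation: at each stage of an IMDN-style block, a fraction of the channels is distilled (retained for the block output) while the coarse remainder is refined and split again, and the distilled slices are concatenated at the end. The concrete figures below (64 channels, distillation rate 0.25, 3 split stages) are illustrative assumptions, not values taken from the paper.

```python
def distillation_plan(channels=64, rate=0.25, stages=3):
    """Channel widths of the slices concatenated at the block output.

    At each stage a fraction `rate` of the channels is distilled (kept),
    while the coarse remainder is refined by a conv layer and, in
    IMDN-style blocks, projected back to the full width before the next
    split. A final conv compresses the last coarse path to the same
    distilled width.
    """
    d = int(channels * rate)   # distilled channels kept per stage (e.g. 16)
    kept = [d] * stages        # one distilled slice per split stage
    kept.append(d)             # final refinement of the coarse path
    return kept


plan = distillation_plan()
# The concatenated slices sum back to the block width, so a trailing
# 1x1 conv can restore the original channel count cheaply.
print(plan, sum(plan))
```

With the assumed settings, three 16-channel distilled slices plus the final 16-channel refined slice concatenate back to the 64-channel block width.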
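The claim that DO-Conv adds no inference-time cost rests on a folding property: the extra (over-parameterizing) linear operator composes with the base kernel into a single kernel of the original shape, so the trained model collapses to an ordinary convolution before deployment. The toy 1-D example below illustrates only this composition property with plain matrices; the shapes and values are illustrative, not the paper's exact formulation.

```python
def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]


def apply_linear(W, x):
    """Apply matrix W to vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]


D = [[1.0, 0.5], [0.0, 2.0]]   # over-parameterizing map (training-time extra)
W = [[0.2, -1.0], [0.3, 0.4]]  # base kernel
x = [1.0, 2.0]

# Training-time view: two sequential linear operators.
y_train = apply_linear(D, apply_linear(W, x))

# Inference-time view: fold once into a kernel of the original size,
# then apply a single operator -- no extra parameters or FLOPs remain.
W_folded = matmul(D, W)
y_infer = apply_linear(W_folded, x)

assert all(abs(a - b) < 1e-9 for a, b in zip(y_train, y_infer))
```

Because convolution is linear in its kernel, the same folding argument carries over from this matrix toy to the depthwise-times-conventional kernel composition used by DO-Conv.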
Pages: 14