A Multi-Attention Feature Distillation Neural Network for Lightweight Single Image Super-Resolution

Cited by: 0
Authors
Zhang, Yongfei [1 ,2 ]
Lin, Xinying [1 ,3 ]
Yang, Hong [1 ]
He, Jie [4 ]
Qing, Linbo [1 ]
He, Xiaohai [1 ]
Li, Yi [5 ]
Chen, Honggang [1 ,6 ]
Affiliations
[1] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Peoples R China
[2] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
[3] Tianjin Univ Technol, Minist Educ, Key Lab Comp Vis & Syst, Tianjin 300384, Peoples R China
[4] Wuzhou Univ, Guangxi Key Lab Machine Vis & Intelligent Control, Wuzhou 543002, Peoples R China
[5] DI Sinma Sichuan Machinery Co Ltd, Suining 629201, Peoples R China
[6] Yunnan Univ, Yunnan Key Lab Software Engn, Kunming 650600, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SPARSE REPRESENTATION; INTERPOLATION;
DOI
10.1155/2024/3255233
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, deep convolutional neural networks (CNNs) have produced remarkable performance improvements for single image super-resolution (SISR). Nevertheless, many CNN-based SISR models rely on deep or wide architectures with large numbers of network parameters and high computational complexity. How to exploit deep features more fully, so as to balance model complexity against reconstruction performance, remains one of the main challenges in this field. To address this problem, building on the well-known information multi-distillation model, a multi-attention feature distillation network termed MAFDN is developed for lightweight and accurate SISR. Specifically, an effective multi-attention feature distillation block (MAFDB) is designed and used as the basic feature extraction unit in MAFDN. With the help of multi-attention layers, including pixel attention, spatial attention, and channel attention, MAFDB uses multiple information distillation branches to learn more discriminative and representative features. Furthermore, MAFDB introduces a residual block (OPCRB) built on the depthwise over-parameterized convolutional layer (DO-Conv) to enhance its representational ability without adding any parameters or computation at the inference stage. Results on commonly used datasets demonstrate that MAFDN outperforms existing representative lightweight SISR models when both reconstruction performance and model complexity are taken into account. For example, for x4 SR on Set5, MAFDN (597K/33.79G) obtains 0.21 dB/0.0037 and 0.10 dB/0.0015 PSNR/SSIM gains over the attention-based SR model AFAN (692K/50.90G) and the feature distillation-based SR model DDistill-SR (675K/32.83G), respectively.
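The abstract's "multiple information distillation branches" follow the channel-splitting scheme of information multi-distillation networks: at each stage, part of the feature channels is split off ("distilled") and retained, while the rest is passed on for further refinement, and all distilled parts are finally concatenated and fused by a 1x1 convolution. A minimal sketch of this channel-splitting arithmetic, assuming an illustrative three-stage block with a 1/2 distillation ratio (the paper's exact ratios and stage count are not stated in this record):

```python
def distillation_schedule(channels, stages=3, ratio=0.5):
    """Compute the per-stage (distilled, remaining) channel counts of an
    information-distillation block, plus the concatenated width that the
    final 1x1 fusion convolution would receive.

    `stages` and `ratio` are illustrative assumptions, not values from
    the paper.
    """
    plan = []
    remaining = channels
    for _ in range(stages):
        distilled = int(remaining * ratio)  # channels split off and kept
        remaining -= distilled              # channels refined further
        plan.append((distilled, remaining))
    # Distilled parts from every stage, plus the last remaining part,
    # are concatenated before fusion.
    fused_width = sum(d for d, _ in plan) + remaining
    return plan, fused_width

plan, fused = distillation_schedule(64)
print(plan, fused)  # [(32, 32), (16, 16), (8, 8)] 64
```

With a 1/2 ratio the fused width equals the input width (32 + 16 + 8 + 8 = 64), which is one reason this split keeps the block lightweight: the fusion convolution does not grow with depth.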
Pages: 14