A Multi-Attention Feature Distillation Neural Network for Lightweight Single Image Super-Resolution

Cited by: 0
Authors
Zhang, Yongfei [1 ,2 ]
Lin, Xinying [1 ,3 ]
Yang, Hong [1 ]
He, Jie [4 ]
Qing, Linbo [1 ]
He, Xiaohai [1 ]
Li, Yi [5 ]
Chen, Honggang [1 ,6 ]
Affiliations
[1] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Peoples R China
[2] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
[3] Tianjin Univ Technol, Minist Educ, Key Lab Comp Vis & Syst, Tianjin 300384, Peoples R China
[4] Wuzhou Univ, Guangxi Key Lab Machine Vis & Intelligent Control, Wuzhou 543002, Peoples R China
[5] DI Sinma Sichuan Machinery Co Ltd, Suining 629201, Peoples R China
[6] Yunnan Univ, Yunnan Key Lab Software Engn, Kunming 650600, Peoples R China
Funding
National Natural Science Foundation of China (NSFC)
Keywords
SPARSE REPRESENTATION; INTERPOLATION;
DOI
10.1155/2024/3255233
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In recent years, deep convolutional neural networks (CNNs) have produced remarkable performance improvements for single image super-resolution (SISR). Nevertheless, many CNN-based SISR models rely on deep or wide architectures with large numbers of parameters and high computational complexity. How to exploit deep features more fully so as to balance model complexity against reconstruction performance remains one of the main challenges in this field. To address this problem, building on the well-known information multi-distillation model, a multi-attention feature distillation network, termed MAFDN, is developed for lightweight and accurate SISR. Specifically, an effective multi-attention feature distillation block (MAFDB) is designed and used as the basic feature extraction unit in MAFDN. With the help of multi-attention layers, including pixel attention, spatial attention, and channel attention, MAFDB uses multiple information distillation branches to learn more discriminative and representative features. Furthermore, MAFDB introduces a residual block (OPCRB) built on the depthwise over-parameterized convolutional layer (DO-Conv) to enhance its representation ability without increasing parameters or computation at the inference stage. Results on commonly used datasets demonstrate that MAFDN outperforms existing representative lightweight SISR models when both reconstruction performance and model complexity are taken into consideration. For example, for ×4 SR on Set5, MAFDN (597K parameters/33.79G FLOPs) obtains PSNR/SSIM gains of 0.21 dB/0.0037 and 0.10 dB/0.0015 over the attention-based SR model AFAN (692K/50.90G) and the feature distillation-based SR model DDistill-SR (675K/32.83G), respectively.
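The DO-Conv claim above, extra capacity during training at no inference cost, rests on kernel folding: after training, the depthwise over-parameterization D and the conventional kernel W collapse into a single equivalent kernel, so inference runs an ordinary convolution. The NumPy sketch below illustrates this folding on one image patch; the shapes and variable names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Illustrative shapes (assumed): C_in input channels, C_out output channels,
# K = kernel_h * kernel_w spatial taps, D_mul over-parameterization depth.
rng = np.random.default_rng(0)
C_in, C_out, K, D_mul = 3, 4, 9, 9

D = rng.standard_normal((C_in, K, D_mul))    # depthwise over-parameterization
W = rng.standard_normal((C_out, C_in, D_mul))  # conventional kernel

x = rng.standard_normal((C_in, K))           # one K-sized patch per input channel

# Training-time composite path: depthwise stage D, then conventional stage W.
feat = np.einsum('ck,ckd->cd', x, D)
y_two_step = np.einsum('cd,ocd->o', feat, W)

# Inference-time folding: collapse D and W once into a single kernel
# of shape (C_out, C_in, K), i.e. an ordinary convolution kernel.
W_folded = np.einsum('ckd,ocd->ock', D, W)
y_folded = np.einsum('ck,ock->o', x, W_folded)

# The folded single-kernel path reproduces the two-stage path exactly.
assert np.allclose(y_two_step, y_folded)
```

Because the folded kernel is computed once after training, test-time cost equals that of a plain convolution with a (C_out, C_in, K) kernel, which is why the over-parameterization adds no parameters or FLOPs at inference.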
Pages: 14