Progressive representation recalibration for lightweight super-resolution

Cited by: 7
Authors
Wen, Ruimian [1 ]
Yang, Zhijing [1 ]
Chen, Tianshui [1 ]
Li, Hao [1 ]
Li, Kai [2 ]
Affiliations
[1] Guangdong Univ Technol, Sch Informat Engn, Guangzhou 510006, Peoples R China
[2] ZEGO, Shenzhen, Peoples R China
Keywords
Super-resolution; Lightweight network; Progressive representation recalibration; Channel attention; Image super-resolution; Attention network
DOI
10.1016/j.neucom.2022.07.050
CLC number
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, the lightweight single-image super-resolution (SISR) task has received increasing attention due to the computational complexity and size of convolutional neural network (CNN)-based SISR models and the explosive demand for applications on resource-limited edge devices. Current algorithms reduce the number of layers and channels in CNNs to obtain lightweight models for this task. However, these algorithms may reduce the representation ability of the learned features due to information loss, inevitably leading to poor performance. In this work, we propose the progressive representation recalibration network (PRRN), a new lightweight SISR network that learns complete and representative feature representations. Specifically, a progressive representation recalibration block (PRRB) is developed to extract useful features from pixel and channel spaces in a two-stage approach. In the first stage, PRRB utilizes pixel and channel information to explore important feature regions. In the second stage, channel attention is further used to adjust the distribution of important feature channels. In addition, current channel attention mechanisms utilize nonlinear operations that may lead to information loss. In contrast, we design a shallow channel attention (SCA) mechanism that can learn the importance of each channel in a simpler yet more efficient way. Extensive experiments demonstrate the superiority of the proposed PRRN. (c) 2022 Elsevier B.V. All rights reserved.
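The abstract contrasts conventional channel attention, whose nonlinear bottleneck may lose information, with the proposed shallow channel attention (SCA). The paper's exact SCA design is not given in this record, so the sketch below is only illustrative: a standard squeeze-and-excitation-style channel attention next to a hypothetical "shallow" variant that replaces the two-layer nonlinear bottleneck with a single per-channel affine rescaling. The function names and the parameters `w1`, `w2`, `scale`, and `bias` are assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_channel_attention(x, w1, w2):
    """Conventional squeeze-and-excitation-style channel attention:
    global average pool -> FC (reduce) -> ReLU -> FC (expand) -> sigmoid.
    x: feature map of shape (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    s = x.mean(axis=(1, 2))          # squeeze: per-channel statistic, shape (C,)
    z = np.maximum(w1 @ s, 0.0)      # bottleneck FC + ReLU (the nonlinear step)
    w = sigmoid(w2 @ z)              # excitation FC + gate, shape (C,)
    return x * w[:, None, None]      # rescale each channel

def shallow_channel_attention(x, scale, bias):
    """Hypothetical shallow variant: the same per-channel statistic,
    but gated by a single learnable affine map per channel, with no
    dimensionality-reducing nonlinear bottleneck."""
    s = x.mean(axis=(1, 2))
    w = sigmoid(scale * s + bias)    # one affine transform per channel
    return x * w[:, None, None]
```

Both functions preserve the input shape and only rescale channels; the shallow variant trades the FC bottleneck's capacity for fewer parameters and no intermediate information loss, which matches the motivation stated in the abstract.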
Pages: 240-250
Page count: 11