Spatial relaxation transformer for image super-resolution

Cited by: 1
Authors
Li, Yinghua [1 ]
Zhang, Ying [1 ]
Zeng, Hao [3 ]
He, Jinglu [1 ]
Guo, Jie [2 ]
Affiliations
[1] Xian Univ Posts & Telecommun, Xian Key Lab Image Proc Technol & Applicat Publ Se, Changan West St, Xian 710121, Shaanxi, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, 2 Southern Tai Bai Rd, Xian 710071, Shaanxi, Peoples R China
[3] Chinese Acad Sci, Inst Software, Beijing, Peoples R China
Keywords
Super-resolution; Vision transformer; Feature aggregation; Image enhancement; Swin transformer
DOI
10.1016/j.jksuci.2024.102150
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Transformer-based approaches have demonstrated remarkable performance in image processing tasks thanks to their ability to model long-range dependencies. Current mainstream Transformer-based methods typically confine self-attention computation within local windows to reduce the computational burden. However, this constraint can introduce grid artifacts into the reconstructed images because of insufficient cross-window information exchange, particularly in image super-resolution. To address this issue, we propose the Multi-Scale Texture Complementation Block based on the Spatial Relaxation Transformer (MSRT), which leverages features at multiple scales and enhances information exchange through cross-window attention computation. In addition, we introduce a loss function based on a texture-smoothness prior, which exploits the continuity of textures across patches to constrain the reconstructed images toward more coherent textures. Specifically, we employ learnable compressive sensing to extract shallow features from images, preserving image content while reducing feature dimensionality and improving computational efficiency. Extensive experiments on multiple benchmark datasets demonstrate that our method outperforms previous state-of-the-art approaches in both qualitative and quantitative evaluations.
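The record gives no implementation details, so the following is only a minimal sketch of the cross-window idea the abstract describes: window self-attention whose key/value windows are enlarged ("relaxed") so that queries in one window can also attend to pixels in neighbouring windows. The module name RelaxedWindowAttention, the window and overlap sizes, and all other specifics are illustrative assumptions, not the authors' MSRT design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelaxedWindowAttention(nn.Module):
    """Window attention with overlapping ('relaxed') key/value windows.

    Hypothetical sketch: queries come from non-overlapping windows, while
    keys/values come from windows enlarged by `overlap` pixels per side,
    letting each window attend slightly into its neighbours.
    """

    def __init__(self, dim, window=8, overlap=2, heads=4):
        super().__init__()
        self.dim, self.window, self.overlap, self.heads = dim, window, overlap, heads
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, H, W, C), H and W divisible by `window`
        B, H, W, C = x.shape
        w, o = self.window, self.overlap
        # Queries: partition into non-overlapping w x w windows.
        q = self.q(x).view(B, H // w, w, W // w, w, C)
        q = q.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)
        # Keys/values: extract enlarged (w + 2*o) windows at the same stride,
        # so each query window sees `o` extra pixels from every neighbour.
        kv = self.kv(x).permute(0, 3, 1, 2)                  # (B, 2C, H, W)
        kv = F.unfold(kv, kernel_size=w + 2 * o, stride=w, padding=o)
        L = (w + 2 * o) ** 2
        kv = kv.view(B, 2 * C, L, -1).permute(0, 3, 2, 1).reshape(-1, L, 2 * C)
        k, v = kv.chunk(2, dim=-1)
        # Standard multi-head attention between window queries and relaxed keys.
        def split(t):
            return t.view(t.shape[0], t.shape[1], self.heads, C // self.heads).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) * (C // self.heads) ** -0.5
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(-1, w * w, C)
        out = self.proj(out)
        # Fold the windows back into a (B, H, W, C) feature map.
        out = out.view(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5)
        return out.reshape(B, H, W, C)
```

For a (B, H, W, C) feature map with H and W divisible by the window size, e.g. `RelaxedWindowAttention(dim=64)(torch.randn(1, 32, 32, 64))`, the module returns a tensor of the same shape; setting `overlap=0` recovers ordinary non-overlapping window attention.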
Pages: 10
Related Papers
50 records in total
  • [21] Spatial-Spectral Aggregation Transformer With Diffusion Prior for Hyperspectral Image Super-Resolution
    Zhang, Mingyang
    Wang, Xiangyu
    Wu, Shuang
    Wang, Zhaoyang
    Gong, Maoguo
    Zhou, Yu
    Jiang, Fenlong
    Wu, Yue
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (04): 3557-3572
  • [22] Local spatial information for image super-resolution
    Zareapoor, Masoumeh
    Jain, Deepak Kumar
    Yang, Jie
    COGNITIVE SYSTEMS RESEARCH, 2018, 52: 49-57
  • [23] DTSR: detail-enhanced transformer for image super-resolution
    Huang, Xiaoqian
    Huang, Detian
    Huang, Qin
    Huang, Caixia
    Chen, Feiyang
    Xu, Zhengjun
    VISUAL COMPUTER, 2024, 40 (11): 7667-7684
  • [24] LCFormer: linear complexity transformer for efficient image super-resolution
    Gao, Xiang
    Wu, Sining
    Zhou, Ying
    Wang, Fan
    Hu, Xiaopeng
    MULTIMEDIA SYSTEMS, 2024, 30 (04)
  • [25] Efficient Swin Transformer for Remote Sensing Image Super-Resolution
    Kang, Xudong
    Duan, Puhong
    Li, Jier
    Li, Shutao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33: 6367-6379
  • [26] Batch-transformer for scene text image super-resolution
    Sun, Yaqi
    Xie, Xiaolan
    Li, Zhi
    Yang, Kai
    VISUAL COMPUTER, 2024, 40 (10): 7399-7409
  • [27] SVTSR: image super-resolution using scattering vision transformer
    Liang, Jiabao
    Jin, Yutao
    Chen, Xiaoyan
    Huang, Haotian
    Deng, Yue
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [28] SRInpaintor: When Super-Resolution Meets Transformer for Image Inpainting
    Li, Feng
    Li, Anqi
    Qin, Jia
    Bai, Huihui
    Lin, Weisi
    Cong, Runmin
    Zhao, Yao
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2022, 8: 743-758
  • [29] Image Super-Resolution Using a Simple Transformer Without Pretraining
    Liu, Huan
    Shao, Mingwen
    Wang, Chao
    Cao, Feilong
    NEURAL PROCESSING LETTERS, 2023, 55: 1479-1497
  • [30] Transformer-based image super-resolution and its lightweight
    Zhang, Dongxiao
    Qi, Tangyao
    Gao, Juhao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (26): 68625-68649