Batch-transformer for scene text image super-resolution

Cited: 1
Authors
Sun, Yaqi [1 ,3 ]
Xie, Xiaolan [1 ,2 ]
Li, Zhi [1 ]
Yang, Kai [3 ]
Affiliations
[1] Guangxi Normal Univ, Sch Comp Sci & Engn, Guilin, Guangxi, Peoples R China
[2] Guilin Univ Technol, Sch Informat Sci & Engn, Guilin, Guangxi, Peoples R China
[3] Hengyang Normal Univ, Sch Comp Sci & Technol, Hengyang, Peoples R China
Source
VISUAL COMPUTER | 2024, Vol. 40, Issue 10
Funding
National Natural Science Foundation of China;
Keywords
Computer vision; Super-resolution; Scene text image; Batch-transformer; Loss function; NETWORK;
DOI
10.1007/s00371-024-03598-7
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Code
081202 ; 0835 ;
Abstract
Recognizing low-resolution text images is challenging because they often lose fine detail, which degrades recognition accuracy. Moreover, traditional methods based on deep convolutional neural networks (CNNs) are not effective enough on low-resolution text images with dense characters. This paper proposes a novel CNN-based batch-transformer network for scene text image super-resolution (BT-STISR) to address this problem. First, a pre-trained text prior module is employed to extract the text information needed for reconstruction. Then, a novel two-pipeline batch-transformer-based module is proposed that leverages self-attention and global attention mechanisms to apply the guidance of the text prior to the text reconstruction process. Experiments on the benchmark TextZoom dataset show that BT-STISR achieves state-of-the-art performance in terms of structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) compared with recent methods.
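The abstract's two-pipeline guidance idea (self-attention over image features, plus an attention path that injects the extracted text prior) can be sketched as follows. This is a minimal NumPy illustration only: the function names, the additive fusion, and all tensor dimensions are assumptions for exposition, not the authors' BT-STISR implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: (B, Nq, D) x (B, Nk, D) -> (B, Nq, D)
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def two_pipeline_block(image_feats, text_prior):
    """Sketch of two-pipeline guidance (assumed structure):
    pipeline 1: self-attention within the image feature sequence;
    pipeline 2: cross-attention where image features attend to the
    text prior sequence, injecting textual guidance."""
    self_out = attention(image_feats, image_feats, image_feats)
    cross_out = attention(image_feats, text_prior, text_prior)
    return self_out + cross_out  # additive fusion is an assumption

# toy shapes: batch 2, 64 image tokens, 26 text-prior tokens, dim 32
rng = np.random.default_rng(0)
img = rng.standard_normal((2, 64, 32))
prior = rng.standard_normal((2, 26, 32))
out = two_pipeline_block(img, prior)
print(out.shape)  # (2, 64, 32)
```

The output keeps the image-feature shape, so such a block could be stacked before an upsampling head; the real network's fusion and attention layout may differ.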
Pages: 7399-7409
Page count: 11
Related Papers
50 records in total
  • [31] Steformer: Efficient Stereo Image Super-Resolution With Transformer
    Lin, Jianxin
    Yin, Lianying
    Wang, Yijun
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8396 - 8407
  • [32] Image Super-Resolution Using Dilated Window Transformer
    Park, Soobin
    Choi, Yong Suk
    IEEE ACCESS, 2023, 11 : 60028 - 60039
  • [33] Efficient mixed transformer for single image super-resolution
    Zheng, Ling
    Zhu, Jinchen
    Shi, Jinpeng
    Weng, Shizhuang
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [34] ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution
    Zhang, Mingjin
    Zhang, Chi
    Zhang, Qiming
    Guo, Jie
    Gao, Xinbo
    Zhang, Jing
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 23016 - 23027
  • [35] Multi-granularity Transformer for Image Super-Resolution
    Zhuge, Yunzhi
    Jia, Xu
    COMPUTER VISION - ACCV 2022, PT III, 2023, 13843 : 138 - 154
  • [36] Learning Texture Transformer Network for Image Super-Resolution
    Yang, Fuzhi
    Yang, Huan
    Fu, Jianlong
    Lu, Hongtao
    Guo, Baining
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 5790 - 5799
  • [37] Enhancing Image Super-Resolution with Dual Compression Transformer
    Yu, Jiaxing
    Chen, Zheng
    Wang, Jingkai
    Kong, Linghe
    Yan, Jiajie
    Gu, Wei
    VISUAL COMPUTER, 2024,
  • [38] Efficient Dual Attention Transformer for Image Super-Resolution
    Park, Soobin
    Jeong, Yuna
    Choi, Yong Suk
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 963 - 970
  • [39] Reinforced Swin-Convs Transformer for Simultaneous Underwater Sensing Scene Image Enhancement and Super-resolution
    Ren, Tingdi
    Xu, Haiyong
    Jiang, Gangyi
    Yu, Mei
    Zhang, Xuan
    Wang, Biao
    Luo, Ting
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [40] More and Less: Enhancing Abundance and Refining Redundancy for Text-Prior-Guided Scene Text Image Super-Resolution
    Yang, Wei
    Luo, Yihong
    Ibrayim, Mayire
    Hamdulla, Askar
    DOCUMENT ANALYSIS AND RECOGNITION-ICDAR 2024, PT V, 2024, 14808 : 129 - 146