Improving attention mechanisms in transformer architecture in image restoration

Times cited: 0
Authors
Berezhnov, N. I. [1 ]
Sirota, A. A. [1 ]
Affiliations
[1] Voronezh State Univ, Comp Sci Fac, Informat Secur & Proc Technol Dept, Universitetskaya Sq 1, Voronezh 394018, Russia
Keywords
image quality improvement; neural networks; transformer models; attention mechanism;
DOI
10.18287/2412-6179-CO-1393
CLC number
O43 [Optics]
Subject classification codes
070207; 0803
Abstract
We address the problem of improving the quality of images degraded by various kinds of noise and distortion. We solve it with transformer neural network models, which have recently shown high efficiency in computer vision tasks. The attention mechanism of transformer models is investigated, and problems arising in existing implementations of this mechanism are identified. We propose a novel modification of the attention mechanism that reduces the number of neural network parameters, and we compare the proposed transformer model with known ones. Several datasets with natural and generated distortions are considered. The networks are trained with the Edge Loss function to preserve image sharpness during noise removal. The influence of the degree of channel information compression in the proposed attention mechanism on restoration quality is investigated. PSNR, SSIM, and FID metrics are used to assess the quality of the restored images and to compare the proposed model with existing neural network architectures on each dataset. The results confirm that the proposed architecture is at least on par with known approaches in improving image quality while requiring less computing resources. As the channel compression ratio increases within reasonable limits, the perceived quality of the improved images decreases only slightly.
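The abstract's central idea, compressing channel information inside the attention mechanism to cut the parameter count, can be illustrated with a short PyTorch sketch. The record does not give the paper's exact layer layout, so the class name, the use of channel-wise (transposed) attention, and the reduction ratio r below are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn


class CompressedChannelAttention(nn.Module):
    """Channel-wise ("transposed") self-attention with the query, key and
    value channels compressed by a factor r, shrinking the attention map
    from C x C to (C/r) x (C/r) and reducing parameters. Illustrative
    sketch only; not the authors' exact architecture."""

    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        assert channels % r == 0, "channels must be divisible by r"
        self.c_red = channels // r
        # 1x1 convolutions act as the usual linear projections
        self.to_q = nn.Conv2d(channels, self.c_red, 1, bias=False)
        self.to_k = nn.Conv2d(channels, self.c_red, 1, bias=False)
        self.to_v = nn.Conv2d(channels, self.c_red, 1, bias=False)
        self.proj = nn.Conv2d(self.c_red, channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2)                       # (b, c/r, h*w)
        k = self.to_k(x).flatten(2)                       # (b, c/r, h*w)
        v = self.to_v(x).flatten(2)                       # (b, c/r, h*w)
        # attention is computed between channels, not between pixels,
        # so its cost does not grow quadratically with image size
        attn = (q @ k.transpose(1, 2)) / (h * w) ** 0.5   # (b, c/r, c/r)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).view(b, self.c_red, h, w)
        return x + self.proj(out)                         # residual connection


# shape check: the block keeps the input shape
block = CompressedChannelAttention(channels=64, r=4)
assert block(torch.randn(2, 64, 32, 32)).shape == (2, 64, 32, 32)
```

Raising r trades parameters for quality, which matches the abstract's observation that perceived quality degrades only slightly for a moderate compression ratio.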
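The Edge Loss named for training is commonly implemented as a penalty between edge maps of the restored and reference images, so that denoising does not over-smooth contours. The record does not specify the exact variant; the sketch below assumes a fixed 3x3 Laplacian filter and a Charbonnier penalty, combined with an ordinary pixel loss using a small assumed weight.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeLoss(nn.Module):
    """Penalises differences between the Laplacian (edge) maps of the
    restored and ground-truth images. Assumed variant: fixed Laplacian
    kernel + Charbonnier penalty; the paper may use a different form."""

    def __init__(self, eps: float = 1e-3):
        super().__init__()
        lap = torch.tensor([[0.0, 1.0, 0.0],
                            [1.0, -4.0, 1.0],
                            [0.0, 1.0, 0.0]])
        self.register_buffer("kernel", lap.view(1, 1, 3, 3))
        self.eps = eps

    def _laplacian(self, img: torch.Tensor) -> torch.Tensor:
        c = img.shape[1]
        weight = self.kernel.expand(c, 1, 3, 3)   # same kernel per channel
        return F.conv2d(img, weight, padding=1, groups=c)

    def forward(self, restored: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        diff = self._laplacian(restored) - self._laplacian(target)
        # Charbonnier (smooth L1) penalty over the edge maps
        return torch.sqrt(diff * diff + self.eps ** 2).mean()


# typical use: combine with a pixel loss (the 0.05 weight is an assumption)
pixel_loss, edge_loss = nn.L1Loss(), EdgeLoss()
restored, target = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
total = pixel_loss(restored, target) + 0.05 * edge_loss(restored, target)
```

For evaluation, PSNR and SSIM can be computed with standard implementations such as skimage.metrics.peak_signal_noise_ratio and skimage.metrics.structural_similarity; FID additionally requires an Inception feature extractor.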
Pages: 726-733
Number of pages: 9
Related papers
50 records in total
• [41] Attention Head Interactive Dual Attention Transformer for Hyperspectral Image Classification. Shi, Cuiping; Yue, Shuheng; Wang, Liguo. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62: 1-1.
• [42] Cascaded transformer U-net for image restoration. Yan, Longbin; Zhao, Min; Liu, Shumin; Shi, Shuaikai; Chen, Jie. SIGNAL PROCESSING, 2023, 206.
• [43] Fisheye image rectification and restoration based on Swin Transformer. Xu, Jian; Han, Dewei; Li, Kang; Li, Junjie; Ma, Zhaoyuan. IET IMAGE PROCESSING, 2025, 19(1).
• [44] Abnormal Detection Based on Graph Attention Mechanisms and Transformer. Yan L.; Zhang K.; Xu H.; Han S.-Y.; Liu S.-Q.; Shi Y.-L. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2022, 50(4): 900-908.
• [45] Evolutionary Neural Architecture Search for Image Restoration. van Wyk, Gerard Jacques; Bosman, Anna Sergeevna. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019.
• [46] Low-light Image Enhancement Using Attention Mechanisms on Edge-Connect Architecture. Dudak, Muhammet Nuri; Soylemez, Busra; Ciftci, Serdar. 32ND IEEE SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU 2024, 2024.
• [47] Hourglass attention and progressive hybrid Transformer for image classification. Peng, Yanfei; Cui, Yun; Chen, Kun; Li, Yongxin. CHINESE JOURNAL OF LIQUID CRYSTALS AND DISPLAYS, 2024, 39(9): 1223-1232.
• [48] TripleFormer: improving transformer-based image classification method using multiple self-attention inputs. Gong, Yu; Wu, Peng; Xu, Renjie; Zhang, Xiaoming; Wang, Tao; Li, Xuan. VISUAL COMPUTER, 2024, 40(12): 9039-9050.
• [49] DUAL ATTENTION ENHANCED TRANSFORMER FOR IMAGE DEFOCUS DEBLURRING. He, Yuhang; Tian, Senmao; Zhang, Jian; Zhang, Shunli. 2024 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2024: 1487-1493.
• [50] Sparse self-attention transformer for image inpainting. Huang, Wenli; Deng, Ye; Hui, Siqi; Wu, Yang; Zhou, Sanping; Wang, Jinjun. PATTERN RECOGNITION, 2024, 145.