Improving attention mechanisms in transformer architecture in image restoration

Cited by: 0
Authors
Berezhnov, N. I. [1 ]
Sirota, A. A. [1 ]
Affiliations
[1] Voronezh State Univ, Comp Sci Fac, Informat Secur & Proc Technol Dept, Universitetskaya Sq 1, Voronezh 394018, Russia
Keywords
image quality improvement; neural networks; transformer models; attention mechanism;
DOI
10.18287/2412-6179-CO-1393
CLC classification number
O43 [Optics];
Subject classification codes
070207 ; 0803 ;
Abstract
We discuss the problem of improving the quality of images corrupted by various kinds of noise and distortion. In this work we solve it using transformer neural network models, which have recently shown high efficiency in computer vision tasks. The attention mechanism of transformer models is investigated, and problems with existing implementations of this mechanism are identified. We propose a novel modification of the attention mechanism aimed at reducing the number of neural network parameters, and we compare the proposed transformer model with known ones. Several datasets with natural and generated distortions are considered. For training the neural networks, the Edge Loss function is used to preserve image sharpness during noise removal. The influence of the channel-information compression ratio in the proposed attention mechanism on image restoration quality is investigated. PSNR, SSIM, and FID metrics are used to assess the quality of the restored images and to compare against existing neural network architectures on each dataset. The proposed architecture is confirmed to be at least on par with known approaches in improving image quality, while requiring fewer computing resources. The quality of the restored images, as perceived by the naked human eye, is shown to decrease only slightly as the channel-information compression ratio increases within reasonable limits.
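The abstract describes compressing channel information inside the attention mechanism to reduce the parameter count. The paper's exact formulation is not reproduced in this record, so the sketch below only illustrates the general idea in NumPy: channel-wise attention (in the spirit of Restormer-style transposed attention) whose Q/K/V projections squeeze the C channels down to C//r, shrinking the attention matrix from C x C to (C//r) x (C//r). All function and variable names, and the specific projection scheme, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compressed_channel_attention(x, r=4):
    """Hypothetical channel attention with Q/K/V compressed by ratio r.

    x : (C, N) feature map -- C channels, N = H*W spatial positions.
    Projecting down to C//r channels shrinks the attention matrix
    from C x C to (C//r) x (C//r), cutting parameters and FLOPs.
    """
    C, N = x.shape
    Cr = C // r
    wq = rng.standard_normal((Cr, C)) / np.sqrt(C)   # compressing projections
    wk = rng.standard_normal((Cr, C)) / np.sqrt(C)
    wv = rng.standard_normal((Cr, C)) / np.sqrt(C)
    wo = rng.standard_normal((C, Cr)) / np.sqrt(Cr)  # expand back to C channels
    q, k, v = wq @ x, wk @ x, wv @ x                 # each (C//r, N)
    attn = softmax((q @ k.T) / np.sqrt(N), axis=-1)  # (C//r, C//r) channel map
    return wo @ (attn @ v)                           # restore (C, N) shape

x = rng.standard_normal((64, 256))   # e.g. 64 channels over a 16x16 patch
y = compressed_channel_attention(x, r=4)
print(y.shape)  # (64, 256)
```

With r = 4, the attention map here is 16 x 16 instead of 64 x 64, which is the kind of saving the abstract attributes to channel-information compression; the abstract's finding is that moderate values of r barely affect perceived quality.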
Pages: 726 - 733
Number of pages: 9
Related papers
50 records
  • [1] Region Attention Transformer for Medical Image Restoration
    Yang, Zhiwen
    Chen, Haowei
    Qian, Ziniu
    Zhou, Yang
    Zhang, Hui
    Zhao, Dan
    Wei, Bingzheng
    Xu, Yan
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT VII, 2024, 15007 : 603 - 613
  • [2] Restoration of Material Pore Structure Image Using Transformer Architecture
    Pan, Jianwei
    Yin, Yi
    Li, Yuanbing
    Li, Shujing
    Wang, Wei
    Cai, Zhen
    Xu, Xin
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 1214 - 1219
  • [3] A Transformer Architecture based mutual attention for Image Anomaly Detection
    Zhang, Mengting
    Tian, Xiuxia
    Virtual Reality and Intelligent Hardware, 2023, 5 (01): 57 - 67
  • [4] Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration
    Lee, Eunho
    Hwang, Youngbae
    IEEE ACCESS, 2024, 12 : 38672 - 38684
  • [5] ConTrans: Improving Transformer with Convolutional Attention for Medical Image Segmentation
    Lin, Ailiang
    Xu, Jiayu
    Li, Jinxing
    Lu, Guangming
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT V, 2022, 13435 : 297 - 307
  • [6] Transformer Architecture and Attention Mechanisms in Genome Data Analysis: A Comprehensive Review
    Choi, Sanghyuk Roy
    Lee, Minhyeok
    BIOLOGY-BASEL, 2023, 12 (07):
  • [7] Transformer architecture based on mutual attention for image-anomaly detection
    Mengting ZHANG
    Xiuxia TIAN
    Virtual Reality & Intelligent Hardware, 2023, 5 (01): 57 - 67
  • [8] A hybrid attention network with convolutional neural network and transformer for underwater image restoration
    Jiao Z.
    Wang R.
    Zhang X.
    Fu B.
    Thanh D.N.H.
    PeerJ Computer Science, 2023, 9
  • [9] A hybrid attention network with convolutional neural network and transformer for underwater image restoration
    Jiao, Zhan
    Wang, Ruizi
    Zhang, Xiangyi
    Fu, Bo
    Thanh, Dang Ngoc Hoang
    PEERJ COMPUTER SCIENCE, 2023, 9
  • [10] Stochastic Window Transformer for Image Restoration
    Xiao, Jie
    Fu, Xueyang
    Wu, Feng
    Zha, Zheng-Jun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,