DAE-Former: Dual Attention-Guided Efficient Transformer for Medical Image Segmentation

Cited by: 48
Authors:
Azad, Reza [1 ]
Arimond, Rene [1 ]
Aghdam, Ehsan Khodapanah [2 ]
Kazerouni, Amirhossein [3 ]
Merhof, Dorit [4 ,5 ]
Affiliations:
[1] Rhein Westfal TH Aachen, Fac Elect Engn & Informat Technol, Aachen, Germany
[2] Shahid Beheshti Univ, Dept Elect Engn, Tehran, Iran
[3] Iran Univ Sci & Technol, Sch Elect Engn, Tehran, Iran
[4] Univ Regensburg, Inst Image Anal & Comp Vis, Fac Informat & Data Sci, Regensburg, Germany
[5] Fraunhofer Inst Digital Med MEVIS, Bremen, Germany
Keywords:
Transformer; Segmentation; Deep Learning; Medical
DOI: 10.1007/978-3-031-46005-0_8
CLC classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Transformers have recently gained attention in the computer vision domain due to their ability to model long-range dependencies. However, the self-attention mechanism at the core of the Transformer model usually suffers from quadratic computational complexity with respect to the number of tokens. Many architectures attempt to reduce model complexity by limiting the self-attention mechanism to local regions or by redesigning the tokenization process. In this paper, we propose DAE-Former, a novel method that offers an alternative perspective by designing the self-attention mechanism itself to be efficient. More specifically, we reformulate self-attention to capture both spatial and channel relations across the whole feature dimension while remaining computationally efficient. Furthermore, we redesign the skip-connection path with a cross-attention module to ensure feature reusability and enhance localization power. Our method outperforms state-of-the-art methods on multi-organ cardiac and skin lesion segmentation datasets without requiring pre-trained weights. The code is publicly available on GitHub.
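The attention reformulation described in the abstract is easiest to see in code. The sketch below is a rough approximation under stated assumptions, not the authors' released implementation: it shows how an efficient (linear-complexity) spatial attention and a transposed channel attention avoid building the N x N token affinity matrix, and how a skip-path cross-attention can reuse encoder features. All module and variable names are illustrative.

```python
# Minimal PyTorch sketch (assumption: not the authors' code) of the two
# attention variants the abstract describes, plus a skip-path cross-attention.

import torch
import torch.nn as nn


class EfficientSpatialAttention(nn.Module):
    """Linear-complexity attention: the softmax-normalized K^T V product is
    computed first, so the cost is O(N * D^2) for N tokens of dimension D
    rather than the O(N^2 * D) of vanilla self-attention."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, D)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q = q.softmax(dim=-1)                # normalize queries over features
        k = k.softmax(dim=-2)                # normalize keys over tokens
        context = k.transpose(-2, -1) @ v    # (B, D, D) global context map
        return self.proj(q @ context)        # (B, N, D); no N x N matrix built


class ChannelAttention(nn.Module):
    """Transposed attention: the D x D affinity relates feature channels,
    capturing channel-wise dependencies at a cost independent of N^2."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, D)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        attn = (q.transpose(-2, -1) @ k).softmax(dim=-1)      # (B, D, D)
        out = (attn @ v.transpose(-2, -1)).transpose(-2, -1)  # (B, N, D)
        return self.proj(out)


class SkipCrossAttention(nn.Module):
    """Cross-attention for the skip path: decoder tokens act as queries over
    encoder tokens, so fine-grained encoder features are reused for
    localization. Standard (quadratic) attention is used here for brevity."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, dim * 2, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, dec: torch.Tensor, enc: torch.Tensor) -> torch.Tensor:
        q = self.to_q(dec)                       # queries from the decoder
        k, v = self.to_kv(enc).chunk(2, dim=-1)  # keys/values from the encoder
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        return self.proj(attn @ v)


# Smoke test on dummy tokens (B=2 images, N=196 patches, D=64 channels).
x = torch.randn(2, 196, 64)
y = ChannelAttention(64)(EfficientSpatialAttention(64)(x))
z = SkipCrossAttention(64)(y, x)
print(y.shape, z.shape)  # torch.Size([2, 196, 64]) torch.Size([2, 196, 64])
```

Computing K^T V before multiplying by Q is what removes the quadratic term: the intermediate matrix is D x D rather than N x N, which matters as the token count grows with image resolution.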
Pages: 83-95 (13 pages)
Related papers (50 records in total; entries 41-50 shown):
  • [41] STA-Former: enhancing medical image segmentation with Shrinkage Triplet Attention in a hybrid CNN-Transformer model
    Liu, Yuzhao
    Han, Liming
    Yao, Bin
    Li, Qing
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (02) : 1901 - 1910
  • [42] SCA-Former: transformer-like network based on stream-cross attention for medical image segmentation
    Gao, Chengrui
    Cheng, Junlong
    Yang, Ziyuan
    Chen, Yingyu
    Zhu, Min
PHYSICS IN MEDICINE AND BIOLOGY, 2023, 68 (24)
  • [44] Feature-guided attention network for medical image segmentation
    Zhou, Hao
    Sun, Chaoyu
    Huang, Hai
    Fan, Mingyu
    Yang, Xu
    Zhou, Linxiao
    MEDICAL PHYSICS, 2023, 50 (08) : 4871 - 4886
  • [45] Efficient Dual Attention Transformer for Image Super-Resolution
    Park, Soobin
    Jeong, Yuna
    Choi, Yong Suk
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 963 - 970
  • [46] A dual attention-guided 3D convolution network for automatic segmentation of prostate and tumor
    Li, Yuchun
    Huang, Mengxing
    Zhang, Yu
    Feng, Siling
    Chen, Jing
    Bai, Zhiming
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 85
  • [47] TSE DeepLab: An efficient visual transformer for medical image segmentation
    Yang, Jingdong
    Tu, Jun
    Zhang, Xiaolin
    Yu, Shaoqing
    Zheng, Xianyou
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 80
  • [48] A hybrid enhanced attention transformer network for medical ultrasound image segmentation
    Jiang, Tao
    Xing, Wenyu
    Yu, Ming
    Ta, Dean
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 86
  • [49] Slimmable transformer with hybrid axial-attention for medical image segmentation
    Hu, Y.
    Mu, N.
    Liu, L.
    Zhang, L.
    Jiang, J.
    Li, X.
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 173
  • [50] Swin Transformer Assisted Prior Attention Network for Medical Image Segmentation
    Liao, Zhihao
    Fan, Neng
    Xu, Kai
APPLIED SCIENCES-BASEL, 2022, 12 (09)