MS-Former: Multi-Scale Self-Guided Transformer for Medical Image Segmentation

Cited by: 0
Authors
Karimijafarbigloo, Sanaz [1 ,2 ]
Azad, Reza [2 ]
Kazerouni, Amirhossein [3 ]
Merhof, Dorit [1 ,4 ]
Affiliations
[1] Univ Regensburg, Fac Informat & Data Sci, Regensburg, Germany
[2] Rhein Westfal TH Aachen, Inst Imaging & Comp Vis, Aachen, Germany
[3] Iran Univ Sci & Technol, Sch Elect Engn, Tehran, Iran
[4] Fraunhofer Inst Digital Med MEVIS, Bremen, Germany
Keywords
Transformer; Inter-scale; Intra-scale; Segmentation; Medical Image
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Multi-scale representations have proven to be a powerful tool since they can capture both the fine-grained details of objects in an image and the broader context. Inspired by this, we propose a novel dual-branch transformer network that operates on two different scales to encode global contextual dependencies while preserving local information. To learn in a self-supervised fashion, our approach exploits the semantic dependency between the two scales to generate a supervisory signal for inter-scale consistency, and additionally imposes a spatial stability loss within each scale for self-supervised content clustering. While the intra-scale and inter-scale consistency losses aim to increase feature similarity within each cluster, we further include a cross-entropy loss on top of the clustering score map to effectively model each cluster's distribution and sharpen the decision boundaries between clusters. Our algorithm iteratively learns to assign each pixel to a semantically related cluster, producing the segmentation map. Extensive experiments on skin lesion and lung segmentation datasets show the superiority of our method compared to state-of-the-art (SOTA) approaches. The implementation code is publicly available on GitHub.
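The following is a minimal, hypothetical PyTorch sketch (not the authors' released code) of how the three self-guided terms described in the abstract could be combined, assuming each branch outputs a per-pixel cluster score map at its own resolution. The function and argument names (self_guided_losses, scores_fine, scores_coarse, tv_weight, ce_weight), the use of a KL divergence for inter-scale consistency, and the total-variation style penalty for intra-scale stability are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of the self-guided loss composition described in the
# abstract: inter-scale consistency + intra-scale spatial stability + a
# cross-entropy term on the clustering score map.
import torch
import torch.nn.functional as F


def self_guided_losses(scores_fine, scores_coarse, tv_weight=1.0, ce_weight=1.0):
    """Combine the three self-supervised terms on two-scale cluster score maps.

    scores_fine:   (B, K, H,  W)  cluster logits from the fine-scale branch
    scores_coarse: (B, K, H', W') cluster logits from the coarse-scale branch
    """
    b, k, h, w = scores_fine.shape

    # Inter-scale consistency: resize the coarse scores to the fine grid and
    # penalise disagreement between the two soft cluster assignments.
    coarse_up = F.interpolate(scores_coarse, size=(h, w), mode="bilinear",
                              align_corners=False)
    p_fine = F.softmax(scores_fine, dim=1)
    log_p_coarse = F.log_softmax(coarse_up, dim=1)
    inter_scale = F.kl_div(log_p_coarse, p_fine, reduction="batchmean")

    # Intra-scale spatial stability: neighbouring pixels should receive
    # similar soft assignments (a simple total-variation style penalty).
    intra_scale = (p_fine[:, :, 1:, :] - p_fine[:, :, :-1, :]).abs().mean() \
                + (p_fine[:, :, :, 1:] - p_fine[:, :, :, :-1]).abs().mean()

    # Cross-entropy on the score map against its own hard (argmax) pseudo
    # labels, which models each cluster distribution and sharpens the
    # decision boundaries between clusters.
    pseudo = scores_fine.argmax(dim=1).detach()          # (B, H, W)
    cluster_ce = F.cross_entropy(scores_fine, pseudo)

    return inter_scale + tv_weight * intra_scale + ce_weight * cluster_ce


# Toy usage: two random score maps at different scales, K = 4 clusters.
loss = self_guided_losses(torch.randn(2, 4, 64, 64, requires_grad=True),
                          torch.randn(2, 4, 32, 32, requires_grad=True))
loss.backward()
```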
Pages: 680-694
Number of pages: 15