Vision Mamba and xLSTM-UNet for medical image segmentation

Cited by: 0
Authors
Zhong, Xin [1 ]
Lu, Gehao [1 ]
Li, Hao [1 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650504, Yunnan, Peoples R China
Source
SCIENTIFIC REPORTS, 2025, Vol. 15, No. 1
Keywords
Deep Learning; Medical Image Segmentation; SSM; xLSTM; LSTM; Framework
DOI
10.1038/s41598-025-88967-5
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
Deep learning-based medical image segmentation methods are generally divided into convolutional neural networks (CNNs) and Transformer-based models. Traditional CNNs are limited by their receptive field, making it challenging to capture long-range dependencies. While Transformers excel at modeling global information, their high computational complexity restricts their practical application in clinical scenarios. To address these limitations, this study introduces VMAXL-UNet, a novel segmentation network that integrates Structured State Space Models (SSM) and lightweight LSTMs (xLSTM). The network incorporates Visual State Space (VSS) and ViL modules in the encoder to efficiently fuse local boundary details with global semantic context. The VSS module leverages SSM to capture long-range dependencies and extract critical features from distant regions. Meanwhile, the ViL module employs a gating mechanism to enhance the integration of local and global features, thereby improving segmentation accuracy and robustness. Experiments on datasets such as ISIC17, ISIC18, CVC-ClinicDB, and Kvasir demonstrate that VMAXL-UNet significantly outperforms traditional CNNs and Transformer-based models in capturing lesion boundaries and their distant correlations. These results highlight the model's superior performance and provide a promising approach for efficient segmentation in complex medical imaging scenarios.
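To make the encoder design described in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch of an encoder stage that pairs a simplified state-space-style token mixer (a stand-in for the VSS block) with a gated branch in the spirit of xLSTM/ViL. All class names, shapes, and the toy per-channel recurrence are assumptions for illustration; this is not the authors' implementation of VMAXL-UNet.

# Hypothetical illustration only -- not the authors' code. A toy encoder stage
# pairing a simplified state-space-style token mixer (stand-in for a VSS/SSM
# block) with a gated branch loosely inspired by xLSTM/ViL gating.
import torch
import torch.nn as nn


class ToySSMMixer(nn.Module):
    """Per-channel linear recurrence over the token sequence (simplified SSM)."""
    def __init__(self, dim: int):
        super().__init__()
        self.in_proj = nn.Linear(dim, dim)
        self.decay = nn.Parameter(torch.zeros(dim))    # learned per-channel decay
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, L, C)
        u = self.in_proj(x)
        a = torch.sigmoid(self.decay)                  # keep decay in (0, 1)
        h = torch.zeros_like(u[:, 0])                  # hidden state: (B, C)
        states = []
        for t in range(u.shape[1]):                    # sequential scan over tokens
            h = a * h + (1.0 - a) * u[:, t]
            states.append(h)
        return self.out_proj(torch.stack(states, dim=1))


class GatedBranch(nn.Module):
    """Sigmoid gate blending features, loosely mimicking xLSTM-style gating."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.gate(x)) * self.value(x)


class EncoderStage(nn.Module):
    """One residual encoder stage: long-range mixing followed by gated fusion."""
    def __init__(self, dim: int):
        super().__init__()
        self.norm1, self.mixer = nn.LayerNorm(dim), ToySSMMixer(dim)
        self.norm2, self.gated = nn.LayerNorm(dim), GatedBranch(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, L, C) image tokens
        x = x + self.mixer(self.norm1(x))   # long-range dependency modeling
        x = x + self.gated(self.norm2(x))   # gated local/global feature fusion
        return x


if __name__ == "__main__":
    tokens = torch.randn(2, 16 * 16, 64)       # 2 images, 16x16 patches, 64 channels
    print(EncoderStage(64)(tokens).shape)      # torch.Size([2, 256, 64])

In the actual VMAXL-UNet, the VSS module would rely on a selective scan over multiple spatial traversal directions and the ViL module on full xLSTM cells; the sketch above only conveys the residual structure and the role of gating in fusing local and global features.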
Pages: 12