Sparse Coding Inspired LSTM and Self-Attention Integration for Medical Image Segmentation

Times Cited: 0
Authors
Ji, Zexuan [1]
Ye, Shunlong [1]
Ma, Xiao [1]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Funding
National Science Foundation (US);
Keywords
Sparse coding; contextual module; LSTM; self-attention; medical image segmentation; NETWORK; 2D; CLASSIFICATION
DOI
10.1109/TIP.2024.3482189
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Accurate and automatic segmentation of medical images plays an essential role in clinical diagnosis and analysis. It is well established that integrating contextual relationships substantially enhances the representational ability of neural networks. Long Short-Term Memory (LSTM) and Self-Attention (SA) mechanisms are both recognized for their proficiency in capturing global dependencies within data, yet they have typically been treated as distinct modules without a direct linkage. This paper presents the integration of LSTM design with SA sparse coding as its key innovation: linear combinations of LSTM states form SA's query, key, and value (QKV) matrices, leveraging LSTM's capability for state compression and retention of historical information. This approach addresses the shortcoming of conventional sparse coding methods that overlook temporal information, thereby strengthening SA's ability to perform sparse coding and capture global dependencies. Building on this premise, we introduce two innovative modules that weave the SA matrix into the LSTM state design in distinct ways, enabling LSTM to model global dependencies more effectively and to integrate seamlessly with SA without extra computational cost. Both modules are embedded separately into a U-shaped convolutional neural network architecture to handle both 2D and 3D medical images. Experimental evaluations on downstream medical image segmentation tasks show that the proposed modules not only perform strongly on four widely used datasets across various baselines but also improve prediction accuracy, even on baselines that already incorporate contextual modules. Code is available at https://github.com/yeshunlong/SALSTM.
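The sketch below illustrates the central idea described in the abstract: deriving the self-attention query, key, and value matrices as linear combinations of LSTM states computed over a flattened feature map. It is not the authors' implementation (see the linked repository for that); the module name LSTMStateAttention, the hidden size, the residual refinement, and the single-layer LSTM are assumptions introduced only for illustration, written against standard PyTorch.

```python
# Minimal sketch, assuming standard PyTorch; not the official SALSTM code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSTMStateAttention(nn.Module):
    """Self-attention whose Q, K, V are linear combinations of LSTM states."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        # LSTM scans the flattened feature map, compressing history into its states.
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        # Linear combinations of LSTM states -> Q, K, V (assumed layout).
        self.to_q = nn.Linear(hidden, hidden, bias=False)
        self.to_k = nn.Linear(hidden, hidden, bias=False)
        self.to_v = nn.Linear(hidden, hidden, bias=False)
        self.proj = nn.Linear(hidden, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from one stage of a U-shaped network.
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)       # (B, H*W, C) token sequence
        states, _ = self.lstm(seq)               # (B, H*W, hidden) per-step states
        q, k, v = self.to_q(states), self.to_k(states), self.to_v(states)
        attn = F.softmax(q @ k.transpose(-2, -1) / k.size(-1) ** 0.5, dim=-1)
        out = self.proj(attn @ v)                # (B, H*W, C)
        # Residual refinement of the input feature map (an assumed design choice).
        return x + out.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    feat = torch.randn(2, 32, 16, 16)            # toy encoder features
    print(LSTMStateAttention(32)(feat).shape)    # torch.Size([2, 32, 16, 16])
```

In this reading, the LSTM supplies the temporal/sequential context that plain sparse coding lacks, and attention is computed over projections of its states rather than over raw tokens; the paper's two proposed modules differ in how the SA matrix is woven back into the LSTM state update.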
Pages: 6098-6113
Number of Pages: 16