Dual Encoder Decoder Shifted Window-Based Transformer Network for Polyp Segmentation with Self-Learning Approach
Cited by: 1
Authors: P. L. [1,7]; Ullah M. [3]; Vats A. [3]; Cheikh F.A. [3,5]; Kumar G. S. [1,7]; Nair M.S. [1,7]
Affiliations:
[1] Computer Vision Lab, Department of Computer Science, Cochin University of Science and Technology, Kochi, Kerala
Source: IEEE Transactions on Artificial Intelligence
Keywords: Barlow twins; Colonoscopy; Computational modeling; Computer architecture; Convolutional neural networks (CNN); Decoding; Dilated convolution; Image segmentation; Polyp segmentation; Transformers
DOI: 10.1109/TAI.2024.3366146
Abstract:
According to WHO reports, cancer is the leading cause of death worldwide, and colorectal cancer is the second most prevalent cause of cancer-related death in both men and women. One potential approach for reducing the severity of colon cancer is the automatic segmentation and detection of colorectal polyps in colonoscopy videos. This technology can assist endoscopists in quickly identifying colorectal disease, leading to earlier intervention and better patient Quality of Life (QoL). In this paper, we propose a self-supervised, transformer-based dual encoder-decoder architecture named P-SwinNet for polyp segmentation in colonoscopy images. P-SwinNet adopts a dual encoder-decoder design that enhances the feature maps by sharing multiscale information from the encoder to the decoder. The proposed model uses multiple dilated convolutions to enlarge the field of view and gather more context without increasing the computational cost or sacrificing spatial information. We also leverage a large-scale unlabelled dataset to train our model using the self-learning strategy of Barlow Twins. Additionally, to capture long-range dependencies in the data, we use a shifted window-based approach that computes global attention. We extensively evaluate our model against state-of-the-art algorithms. The quantitative results show that the proposed P-SwinNet achieves a mean Dice score of 0.87 and a mean Intersection over Union (IoU) of 0.82 on the five datasets used in our study. This performance demonstrates a substantial advancement over existing comparable works, highlighting the advantage and novelty of our proposed approach in the field of medical image segmentation.
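The self-learning strategy mentioned in the abstract refers to the Barlow Twins objective. Below is a minimal sketch, assuming PyTorch, of how that redundancy-reduction loss is typically computed from two augmented views of the same unlabelled images; it is an illustration under these assumptions, not the authors' implementation, and the name `lambda_offdiag` is a hypothetical hyperparameter.

```python
# Minimal Barlow Twins loss sketch (illustrative; not the authors' code).
import torch


def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor,
                      lambda_offdiag: float = 5e-3) -> torch.Tensor:
    """z1, z2: (N, D) encoder embeddings of two augmented views of a batch."""
    n = z1.shape[0]
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(dim=0)) / (z1.std(dim=0) + 1e-6)
    z2 = (z2 - z2.mean(dim=0)) / (z2.std(dim=0) + 1e-6)
    # Cross-correlation matrix between the two views, shape (D, D).
    c = (z1.T @ z2) / n
    # Pull diagonal entries toward 1 (invariance to augmentation) and push
    # off-diagonal entries toward 0 (redundancy reduction between features).
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag
```

In a typical setup of this kind, `z1` and `z2` would come from the encoder applied to two augmentations of the same unlabelled colonoscopy frames, so pretraining needs no segmentation masks; the decoder and segmentation head are then fine-tuned on the labelled datasets.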
Pages: 1-14
Page count: 13