BMANet: Boundary-guided multi-level attention network for polyp segmentation in colonoscopy images

Times Cited: 0
Authors
Wu, Zihuang [1 ]
Chen, Hua [1 ]
Xiong, Xinyu [2 ,3 ]
Wu, Shang [1 ]
Li, Hongwei [1 ]
Zhou, Xinyu [1 ]
Affiliations
[1] Jiangxi Normal Univ, Sch Comp & Informat Engn, Nanchang, Peoples R China
[2] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[3] Hangzhou Hikvis Digital Technol Co Ltd, EZVIZ, Hangzhou, Peoples R China
Keywords
Colonoscopy images; Polyp segmentation; Attention mechanism; Boundary-aware feature; Multi-scale feature
DOI
10.1016/j.bspc.2025.107524
CLC Number
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Automated and accurate polyp segmentation is essential for assisting physicians in identifying polyps during colonoscopy, playing a key role in preventing and diagnosing colorectal cancer. Despite significant advances in deep learning-based polyp segmentation methods in recent years, several challenges remain. The shape, size, and texture of polyps vary considerably, complicating the development of a universal approach. Moreover, polyps are often obscured by the surrounding mucosa, making accurate delineation of their boundaries difficult. To address these challenges, we propose the Boundary-guided Multi-level Attention Network (BMANet) for polyp segmentation. Our method begins with a Cascaded Partial Decoder (CPD) that aggregates high-level semantic features into a coarse global feature map. To refine these features, we introduce a Boundary Aware Module (BAM) that combines low-level and global features to produce distinct boundary features. Furthermore, we present a Boundary-guided Multi-level Attention (BMA) module that integrates encoder features, fine boundary features from the BAM, and output features from adjacent higher levels. This integration strengthens the network's attention to both polyp regions and boundaries, ensuring that global information and boundary details are considered jointly. Through these mechanisms, BMANet effectively identifies polyp regions and yields segmentation results with precise boundaries. Extensive quantitative and qualitative experiments demonstrate that BMANet is highly competitive with existing state-of-the-art (SOTA) methods. Our code is available at https://github.com/WZH0120/BMANet.
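The abstract describes a coarse-to-fine data flow: the CPD produces a rough global map, the BAM fuses low-level and global features into boundary cues, and the BMA module uses those cues together with the prediction from the adjacent higher level to re-weight encoder features. The PyTorch sketch below is only a rough illustration of that flow; the module internals, channel sizes, class names (BoundaryAwareModule, BoundaryGuidedAttention), and the residual prediction refinement are assumptions, not the authors' implementation, which is released at https://github.com/WZH0120/BMANet.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BoundaryAwareModule(nn.Module):
    # Hypothetical BAM sketch: fuse a low-level encoder feature with the
    # coarse global map to produce a boundary feature and boundary logits.
    def __init__(self, low_channels, mid_channels=64):
        super().__init__()
        self.reduce = nn.Conv2d(low_channels, mid_channels, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(mid_channels + 1, mid_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
        )
        self.boundary_head = nn.Conv2d(mid_channels, 1, kernel_size=1)

    def forward(self, low_feat, global_map):
        g = F.interpolate(global_map, size=low_feat.shape[2:],
                          mode="bilinear", align_corners=False)
        x = self.fuse(torch.cat([self.reduce(low_feat), g], dim=1))
        return x, self.boundary_head(x)  # boundary feature, boundary logits


class BoundaryGuidedAttention(nn.Module):
    # Hypothetical BMA sketch: weight an encoder feature with attention maps
    # derived from the boundary feature and the prediction from the level above.
    def __init__(self, enc_channels, boundary_channels=64):
        super().__init__()
        self.boundary_proj = nn.Conv2d(boundary_channels, enc_channels, kernel_size=1)
        self.out_conv = nn.Sequential(
            nn.Conv2d(enc_channels, enc_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(enc_channels),
            nn.ReLU(inplace=True),
        )
        self.pred_head = nn.Conv2d(enc_channels, 1, kernel_size=1)

    def forward(self, enc_feat, boundary_feat, higher_pred):
        size = enc_feat.shape[2:]
        b = self.boundary_proj(F.interpolate(boundary_feat, size=size,
                                             mode="bilinear", align_corners=False))
        region_att = torch.sigmoid(F.interpolate(higher_pred, size=size,
                                                 mode="bilinear", align_corners=False))
        # Region attention highlights the polyp body; boundary attention
        # re-emphasises pixels near the predicted contour.
        x = enc_feat * region_att + enc_feat * torch.sigmoid(b)
        x = self.out_conv(x)
        # Residual refinement of the coarser prediction, a common choice in
        # coarse-to-fine polyp segmentation networks (assumed here).
        pred = self.pred_head(x) + F.interpolate(higher_pred, size=size,
                                                 mode="bilinear", align_corners=False)
        return x, pred


if __name__ == "__main__":
    low = torch.randn(1, 256, 88, 88)    # low-level encoder feature
    enc = torch.randn(1, 512, 44, 44)    # mid-level encoder feature
    coarse = torch.randn(1, 1, 11, 11)   # coarse global map from the partial decoder
    bam = BoundaryAwareModule(low_channels=256)
    bma = BoundaryGuidedAttention(enc_channels=512)
    boundary_feat, boundary_logits = bam(low, coarse)
    refined_feat, pred = bma(enc, boundary_feat, coarse)
    print(pred.shape)  # torch.Size([1, 1, 44, 44])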
Pages: 12
Related Papers
50 records in total
  • [31] A Multi-level Context Fusion Network for Exudate Segmentation in Retinal Images
    Mo, Juan
    Zhang, Lei
    Feng, Yangqin
    PROCEEDINGS OF 2018 TENTH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE (ICACI), 2018, : 243 - 248
  • [32] Boundary-Guided Lightweight Semantic Segmentation With Multi-Scale Semantic Context
    Zhou, Quan
    Wang, Linjie
    Gao, Guangwei
    Kang, Bin
    Ou, Weihua
    Lu, Huimin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 7887 - 7900
  • [34] Boundary-guided feature integration network with hierarchical transformer for medical image segmentation
    Wang, Fan
    Wang, Bo
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (03) : 8955 - 8969
  • [35] MLAN: Multi-Level Attention Network
    Qin, Peinuan
    Wang, Qinxuan
    Zhang, Yue
    Wei, Xueyao
    Gao, Meiguo
    IEEE ACCESS, 2022, 10 : 105437 - 105446
  • [36] Two-stage segmentation network with feature aggregation and multi-level attention mechanism for multi-modality heart images
    Song, Yuhui
    Du, Xiuquan
    Zhang, Yanping
    Li, Shuo
    COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2022, 97
  • [37] Asymmetric convolutional multi-level attention network for micro-lens segmentation
    Zhong, Shunshun
    Zhou, Haibo
    Yan, Yixiong
    Zhang, Fan
    Duan, Ji'an
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [38] NA-segformer: A multi-level transformer model based on neighborhood attention for colonoscopic polyp segmentation
    Liu, Dong
    Lu, Chao
    Sun, Haonan
    Gao, Shouping
SCIENTIFIC REPORTS, 2024, 14 (01)
  • [39] Concept-guided multi-level attention network for image emotion recognition
    Yang, Hansen
    Fan, Yangyu
    Lv, Guoyun
    Liu, Shiya
    Guo, Zhe
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (05) : 4313 - 4326
  • [40] Attention guided multi-level feature aggregation network for camouflaged object detection
    Wang, Anzhi
    Ren, Chunhong
    Zhao, Shuang
    Mu, Shibiao
    IMAGE AND VISION COMPUTING, 2024, 144