Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation

Cited by: 0
Authors
Shi, Hairong [1 ,2 ]
Han, Songhao [1 ,2 ]
Huang, Shaofei [3 ]
Liao, Yue [4 ]
Li, Guanbin [5 ]
Kong, Xiangxing [6 ]
Zhu, Hua [6 ]
Wang, Xiaomu [7 ]
Liu, Si [1 ,2 ]
Affiliations
[1] Beihang Univ, 37 Xueyuan Rd, Beijing, Peoples R China
[2] Beihang Univ, Hangzhou Innovat Inst, 18 Chuanghui St, Hangzhou, Peoples R China
[3] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[4] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[5] Sun Yat Sen Univ, Guangzhou, Peoples R China
[6] Peking Univ, Peking Univ Canc Hosp & Inst, Dept Nucl Med, Beijing, Peoples R China
[7] Nanjing Univ, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation
Keywords
Tumor Lesion Segmentation; Medical Image Segmentation; Segment Anything Model; NETWORK;
DOI
10.1007/978-3-031-72111-3_38
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Tumor lesion segmentation on CT or MRI images plays a critical role in cancer diagnosis and treatment planning. Given the inherent differences in tumor lesion segmentation data across medical imaging modalities and equipment, integrating medical knowledge into the Segment Anything Model (SAM) is promising due to SAM's versatility and generalization potential. Recent studies have attempted to enhance SAM with medical expertise by pre-training on large-scale medical segmentation datasets. However, challenges remain in 3D tumor lesion segmentation owing to tumor complexity and the imbalance between foreground and background regions. Therefore, we introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation. We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks, facilitating the generation of more precise segmentation masks. Furthermore, an iterative refinement scheme is implemented in M-SAM to refine the segmentation masks progressively, leading to improved performance. Extensive experiments on seven tumor lesion segmentation datasets indicate that our M-SAM not only achieves high segmentation accuracy but also exhibits robust generalization. The code is available at https://github.com/nanase1025/M-SAM.
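The abstract's iterative refinement scheme — conditioning each new prediction on the previous coarse mask as a positional prior — can be illustrated with a minimal toy sketch. This is not the authors' implementation (see their repository for that); the function names and the linear per-voxel "decoder" here are purely hypothetical stand-ins for M-SAM's mask-enhanced decoding path.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(feats, prior, w_feat, w_prior, bias):
    """Toy decoder: score each voxel from its features plus the previous
    mask value, which acts as a positional prior (hypothetical stand-in
    for the Mask-Enhanced Adapter's fusion step)."""
    logits = feats @ w_feat + w_prior * prior + bias
    return sigmoid(logits)

def iterative_refine(feats, w_feat, w_prior=2.0, bias=-0.5, steps=3):
    """Re-predict the mask several times, feeding each coarse mask
    back in as the prior for the next pass."""
    prior = np.zeros(feats.shape[:-1])  # start from an empty mask prior
    for _ in range(steps):
        prior = predict(feats, prior, w_feat, w_prior, bias)
    return prior

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 16, 16, 8))  # toy 3D volume, 8 features/voxel
w = rng.normal(size=8)
mask = iterative_refine(feats, w)
print(mask.shape)  # (4, 16, 16)
```

Each pass sees both the image features and the previous soft mask, so voxels near confidently segmented regions receive a boosted score, which is the intuition behind refining a coarse mask progressively.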
Pages: 403-413
Page count: 11