Segment anything model for medical image analysis: An experimental study

Cited by: 192
Authors
Mazurowski, Maciej A. [1 ,2 ,3 ,4 ]
Dong, Haoyu [2 ,5 ]
Gu, Hanxue [2 ]
Yang, Jichen [2 ]
Konz, Nicholas [2 ]
Zhang, Yixin [2 ]
Affiliations
[1] Duke Univ, Dept Radiol, Durham, NC 27708 USA
[2] Duke Univ, Dept Elect & Comp Engn, Durham, NC 27708 USA
[3] Duke Univ, Dept Comp Sci, Durham, NC 27708 USA
[4] Duke Univ, Dept Biostat & Bioinformat, Durham, NC 27708 USA
[5] Duke Univ, Hock Plaza, 2424 Erwin Rd, Durham, NC 27704 USA
Funding
US National Institutes of Health
Keywords
Segmentation; Foundation models; Deep learning;
DOI
10.1016/j.media.2023.102918
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. The Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While the model's performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts varies widely depending on the dataset and the task, from IoU = 0.1135 for spine MRI to IoU = 0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as organ segmentation in computed tomography, and poorer in other scenarios, such as brain tumor segmentation. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple point prompts are provided iteratively, SAM's performance generally improves only slightly, while the other methods improve to a level that surpasses SAM's point-based performance. We also provide several illustrations of SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others. SAM has the potential to make a significant impact on automated segmentation in medical imaging, but appropriate care needs to be taken when using it. Code for evaluating SAM is publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation.
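As context for the evaluation protocol described in the abstract, the sketch below shows one common way such prompts are derived and scored: a box or point prompt is computed from the ground-truth mask, passed to SAM through the public `segment_anything` API (`SamPredictor`), and the returned mask is scored with IoU. The checkpoint path and the toy image/mask pair are placeholders, and the prompt-derivation rules are an assumption for illustration, not necessarily the paper's exact protocol; the authors' actual evaluation code is at the repository linked above.

```python
# Minimal sketch: prompt SAM from a ground-truth mask and score with IoU.
# Checkpoint path and toy inputs are placeholders (assumptions), and the
# prompt-simulation rules are illustrative, not the paper's exact method.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def box_from_mask(mask: np.ndarray) -> np.ndarray:
    """Tight (x0, y0, x1, y1) bounding box around a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.min(), ys.min(), xs.max(), ys.max()])

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

# Toy stand-ins for a medical image and its annotation: a bright square
# "organ" on a dark background. Replace with a real slice and mask.
image = np.zeros((256, 256, 3), dtype=np.uint8)
image[96:160, 96:160] = 200
gt_mask = np.zeros((256, 256), dtype=bool)
gt_mask[96:160, 96:160] = True

# Checkpoint file is a placeholder; download from the SAM release page.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)  # expects an H x W x 3 uint8 RGB array

# Box prompt derived from the ground truth (cf. finding 3: boxes beat points).
masks, _, _ = predictor.predict(
    box=box_from_mask(gt_mask),
    multimask_output=False,  # one mask; a box leaves little ambiguity
)
print("box-prompt IoU:", iou(masks[0], gt_mask))

# Single positive point prompt at the mask centroid, for comparison.
ys, xs = np.nonzero(gt_mask)
masks, _, _ = predictor.predict(
    point_coords=np.array([[xs.mean(), ys.mean()]]),
    point_labels=np.array([1]),  # 1 = foreground click
    multimask_output=True,       # a lone point is ambiguous; SAM returns 3 masks
)
print("point-prompt IoU:", max(iou(m, gt_mask) for m in masks))
```

Taking the best of the three multimask outputs against the ground truth is one common way to handle point-prompt ambiguity when scoring; whether the paper resolves ambiguity exactly this way is not stated in the abstract.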
Pages: 11
Related papers
50 items in total
  • [21] SnapSeg: Training-Free Few-Shot Medical Image Segmentation with Segment Anything Model
    Yu, Nanxi
    Cai, Zhiyuan
    Huang, Yijin
    Tang, Xiaoying
    TRUSTWORTHY ARTIFICIAL INTELLIGENCE FOR HEALTHCARE, TAI4H 2024, 2024, 14812 : 109 - 122
  • [22] Volumetric medical image segmentation via fully 3D adaptation of Segment Anything Model
    Lin, Haoneng
    Zou, Jing
    Deng, Sen
    Wong, Ka Po
    Aviles-Rivero, Angelica I.
    Fan, Yiting
    Lee, Alex Pui-Wai
    Hu, Xiaowei
    Qin, Jing
    BIOCYBERNETICS AND BIOMEDICAL ENGINEERING, 2025, 45 (01) : 1 - 10
  • [23] LeSAM: Adapt Segment Anything Model for Medical Lesion Segmentation
    Gu, Yunbo
    Wu, Qianyu
    Tang, Hui
    Mai, Xiaoli
    Shu, Huazhong
    Li, Baosheng
    Chen, Yang
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (10) : 6031 - 6041
  • [24] G-SAM: GMM-based segment anything model for medical image classification and segmentation
    Liu, Xiaoxiao
    Zhao, Yan
    Wang, Shigang
    Wei, Jian
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (10) : 14231 - 14245
  • [25] A Novel Universal Image Forensics Localization Model Based on Image Noise and Segment Anything Model
    Su, Yang
    Tan, Shunquan
    Huang, Jiwu
PROCEEDINGS OF THE 2024 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2024, 2024 : 149 - 158
  • [26] Source domain prior-assisted segment anything model for single domain generalization in medical image segmentation
    Dong, Wenhui
    Du, Bo
    Xu, Yongchao
    IMAGE AND VISION COMPUTING, 2024, 150
  • [27] Dr-SAM: U-Shape Structure Segment Anything Model for Generalizable Medical Image Segmentation
    Huo, Xiangzuo
    Tian, Shengwei
    Zhou, Bingming
    Yu, Long
    Li, Aolun
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VII, ICIC 2024, 2024, 14868 : 197 - 207
  • [28] An empirical study on the robustness of the segment anything model (SAM)
    Wang, Yuqing
    Zhao, Yun
    Petzold, Linda
    PATTERN RECOGNITION, 2024, 155
  • [29] GazeSAM: Interactive Image Segmentation with Eye Gaze and Segment Anything Model
    Wang, Bin
    Aboah, Armstrong
    Zhang, Zheyuan
    Pan, Hongyi
    Bagci, Ulas
    GAZE MEETS MACHINE LEARNING WORKSHOP, 2023, 226 : 254 - 264
  • [30] EyeSAM: Unveiling the Potential of Segment Anything Model in Ophthalmic Image Segmentation
    da Silva, Alan Sousa
    Naik, Gunjan
    Bagga, Pallavi
Soomro, Taha
    Reis, Ana P. Ribeiro
    Zhang, Gongyu
    Waisberg, Ethan
    Kandakji, Lynn
    Liu, Siyin
    Fu, Dun Jack
Woof, William
    Moghul, Ismail
    Balaskas, Konstantinos
    Pontikos, Nikolas
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2024, 65 (07)