Segment anything model for medical image analysis: An experimental study

Cited: 192
|
Authors
Mazurowski, Maciej A. [1 ,2 ,3 ,4 ]
Dong, Haoyu [2 ,5 ]
Gu, Hanxue [2 ]
Yang, Jichen [2 ]
Konz, Nicholas [2 ]
Zhang, Yixin [2 ]
Affiliations
[1] Duke Univ, Dept Radiol, Durham, NC 27708 USA
[2] Duke Univ, Dept Elect & Comp Engn, Durham, NC 27708 USA
[3] Duke Univ, Dept Comp Sci, Durham, NC 27708 USA
[4] Duke Univ, Dept Biostat & Bioinformat, Durham, NC 27708 USA
[5] Duke Univ, Hock Plaza, 2424 Erwin Rd, Durham, NC 27704 USA
Funding
US National Institutes of Health;
Keywords
Segmentation; Foundation models; Deep learning;
DOI
10.1016/j.media.2023.102918
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While the model performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts varies widely depending on the dataset and the task, from IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as organ segmentation in computed tomography, and poorer in various other scenarios, such as brain tumor segmentation. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple-point prompts are provided iteratively, SAM's performance generally improves only slightly, while other methods improve to a level that surpasses SAM's point-based performance. We also provide several illustrations of SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others. SAM has the potential to make a significant impact on automated segmentation in medical imaging, but appropriate care needs to be applied when using it. Code for evaluating SAM is made publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation.
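The evaluation protocol described above, prompting SAM with a simulated point or box derived from a ground-truth mask and scoring the prediction with IoU, can be illustrated with the public segment-anything package. The sketch below is a minimal example, not the authors' exact pipeline: it uses a toy synthetic image, a simple prompt scheme (mask centroid as the point, the tight bounding box of the mask as the box), and a placeholder checkpoint filename; the paper's own prompt-generation code is in the linked repository.

```python
# Minimal sketch of single-prompt SAM evaluation against a ground-truth mask.
# Assumes the public `segment-anything` package and a downloaded SAM checkpoint
# (the filename below is a placeholder for whichever checkpoint is used).
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 0.0

# Load SAM and attach the interactive predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Toy image and ground-truth mask standing in for a real dataset sample.
image = np.zeros((256, 256, 3), dtype=np.uint8)
image[80:180, 60:200] = 200                 # bright rectangle as a toy "organ"
gt_mask = np.zeros((256, 256), dtype=bool)
gt_mask[80:180, 60:200] = True

predictor.set_image(image)

# Simulated prompts derived from the ground-truth mask.
ys, xs = np.nonzero(gt_mask)
point = np.array([[xs.mean(), ys.mean()]])                 # one (x, y) foreground point
box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])   # tight bounding box, XYXY

# Single-point prompt (label 1 = foreground); keep SAM's highest-scoring mask
# to resolve prompt ambiguity.
masks, scores, _ = predictor.predict(
    point_coords=point, point_labels=np.array([1]), multimask_output=True
)
print("point-prompt IoU:", iou(masks[np.argmax(scores)], gt_mask))

# Box prompt, which the paper finds generally works better than point prompts.
masks, _, _ = predictor.predict(box=box, multimask_output=False)
print("box-prompt IoU:", iou(masks[0], gt_mask))
```

In a full evaluation this loop would run over every annotated object in each of the 19 datasets, averaging IoU per dataset; iterative multi-point prompting additionally samples subsequent points from the error region between the current prediction and the ground truth.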
Pages: 11
Related papers
50 records in total
  • [31] Evaluation and Improvement of Segment Anything Model for Interactive Histopathology Image Segmentation
    Kim, SeungKyu
    Oh, Hyun-Jic
    Min, Seonghui
    Jeong, Won-Ki
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023 WORKSHOPS, 2023, 14393 : 245 - 255
  • [32] Enhancing Agricultural Image Segmentation with an Agricultural Segment Anything Model Adapter
    Li, Yaqin
    Wang, Dandan
    Yuan, Cao
    Li, Hao
    Hu, Jing
    SENSORS, 2023, 23 (18)
  • [33] Refining Boundaries of the Segment Anything Model in Medical Images Using an Active Contour Model
    Nakhaei, Noor
    Zhang, Tengyue
    Terzopoulos, Demetri
    Hsu, William
    COMPUTER-AIDED DIAGNOSIS, MEDICAL IMAGING 2024, 2024, 12927
  • [34] Segment Anything Model for fetal head-pubic symphysis segmentation in intrapartum ultrasound image analysis
    Zhou, Zihao
    Lu, Yaosheng
    Bai, Jieyun
    Campello, Victor M.
    Feng, Fan
    Lekadir, Karim
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 263
  • [35] BiASAM: Bidirectional-Attention Guided Segment Anything Model for Very Few-Shot Medical Image Segmentation
    Zhou, Wei
    Guan, Guilin
    Cui, Wei
    Yi, Yugen
    IEEE SIGNAL PROCESSING LETTERS, 2025, 32 : 246 - 250
  • [36] S-SAM: SVD-Based Fine-Tuning of Segment Anything Model for Medical Image Segmentation
    Paranjape, Jay N.
    Sikder, Shameema
    Vedula, S. Swaroop
    Patel, Vishal M.
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT XII, 2024, 15012 : 720 - 730
  • [37] Combining Segment Anything Model with Domain-Specific Knowledge for Semi-Supervised Learning in Medical Image Segmentation
    Zhang, Yizhe
    Zhou, Tao
    Wu, Ye
    Gu, Pengfei
    Wang, Shuo
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT XIV, 2025, 15044 : 343 - 357
  • [38] SMALNet: Segment Anything Model Aided Lightweight Network for Infrared Image Segmentation
    Ding, Kun
    Xiang, Shiming
    Pan, Chunhong
    INFRARED PHYSICS & TECHNOLOGY, 2024, 142
  • [39] Enhancing Image Quality in Acoustic Imaging Using the Segment Anything Model (SAM)
    Lang, Y.
    Jiang, Z.
    Sun, L.
    Xiang, L.
    Ren, L.
    MEDICAL PHYSICS, 2024, 51 (10) : 7701 - 7702
  • [40] Generalist Vision Foundation Models for Medical Imaging: A Case Study of Segment Anything Model on Zero-Shot Medical Segmentation
    Shi, Peilun
    Qiu, Jianing
    Abaxi, Sai Mu Dalike
    Wei, Hao
Lo, Frank P.-W.
    Yuan, Wu
    DIAGNOSTICS, 2023, 13 (11)