Segment anything model for medical image analysis: An experimental study

Cited by: 192
Authors
Mazurowski, Maciej A. [1 ,2 ,3 ,4 ]
Dong, Haoyu [2 ,5 ]
Gu, Hanxue [2 ]
Yang, Jichen [2 ]
Konz, Nicholas [2 ]
Zhang, Yixin [2 ]
Affiliations
[1] Duke Univ, Dept Radiol, Durham, NC 27708 USA
[2] Duke Univ, Dept Elect & Comp Engn, Durham, NC 27708 USA
[3] Duke Univ, Dept Comp Sci, Durham, NC 27708 USA
[4] Duke Univ, Dept Biostat & Bioinformat, Durham, NC 27708 USA
[5] Duke Univ, Hock Plaza,2424 Erwin Rd, Durham, NC 27704 USA
Funding
National Institutes of Health (NIH);
Keywords
Segmentation; Foundation models; Deep learning;
DOI
10.1016/j.media.2023.102918
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While the model performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts varies widely depending on the dataset and the task, from IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as the segmentation of organs in computed tomography, and poorer in various other scenarios, such as the segmentation of brain tumors. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple point prompts are provided iteratively, SAM's performance generally improves only slightly, while the other methods improve to a level that surpasses SAM's point-based performance. We also provide several illustrations of SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others.
SAM has the potential to make a significant impact in automated medical image segmentation, but appropriate care needs to be applied when using it. Code for evaluating SAM is made publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation.
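The abstract describes simulating interactive segmentation by deriving point and box prompts from ground-truth masks and scoring predictions with IoU. The sketch below illustrates that idea in plain NumPy; the helper names (`point_prompt`, `box_prompt`, `iou`) are illustrative, and using the mask centroid for the point is a simplification of the paper's actual click-simulation procedure.

```python
import numpy as np

def point_prompt(mask):
    # A single positive click derived from the ground-truth mask.
    # Here we take the centroid of the foreground pixels; the study's
    # simulated-interaction method is more involved (e.g., iterative
    # corrective clicks in error regions).
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())  # (x, y) pixel coordinates

def box_prompt(mask):
    # Tight bounding box around the mask, as (x0, y0, x1, y1).
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def iou(pred, gt):
    # Intersection over union, the metric reported in the study.
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0
```

These prompts would then be passed to a SAM predictor (e.g., as `point_coords`/`box` arguments in the official `segment-anything` API) and the returned mask compared against the ground truth with `iou`.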
Pages: 11
Related papers
50 records total
  • [41] Enhancing Nnunet Performance with a Plug-and-Play Segment Anything Model for Few-Shot Medical Image Segmentation (nnSAM)
    Li, Y.
    Jing, B.
    Wang, J.
    Zhang, Y.
    MEDICAL PHYSICS, 2024, 51 (10) : 7694 - 7694
  • [42] Trans-SAM: Transfer Segment Anything Model to medical image segmentation with Parameter-Efficient Fine-Tuning
    Wu, Yanlin
    Wang, Zhihong
    Yang, Xiongfeng
    Kang, Hong
    He, Along
    Li, Tao
    KNOWLEDGE-BASED SYSTEMS, 2025, 310
  • [43] Segment Anything Model Can Not Segment Anything: Assessing AI Foundation Model's Generalizability in Permafrost Mapping
    Li, Wenwen
    Hsu, Chia-Yu
    Wang, Sizhe
    Yang, Yezhou
    Lee, Hyunho
    Liljedahl, Anna
    Witharana, Chandi
    Yang, Yili
    Rogers, Brendan M.
    Arundel, Samantha T.
    Jones, Matthew B.
    McHenry, Kenton
    Solis, Patricia
    REMOTE SENSING, 2024, 16 (05)
  • [44] Optimizing Scanning Acoustic Tomography Image Segmentation With Segment Anything Model for Semiconductor Devices
    Vu, Thi Thu Ha
    Vo, Tan Hung
    Nguyen, Trong Nhan
    Choi, Jaeyeop
    Mondal, Sudip
    Oh, Junghwan
    IEEE TRANSACTIONS ON SEMICONDUCTOR MANUFACTURING, 2024, 37 (04) : 591 - 601
  • [45] Segment Anything by Meta as a foundation model for image segmentation: a new era for histopathological images
    Chauveau, Bertrand
    Merville, Pierre
    PATHOLOGY, 2023, 55 (07) : 1017 - 1020
  • [46] IAMSAM: Image-based analysis of molecular signatures using the Segment-anything model - Integrative analysis tool for tumor microenvironment
    Lee, Dongjoo
    Park, Jeongbin
    Cook, Seungho
    Yoo, Seongjin
    Lee, Daeseung
    Choi, Hongyoon
    CANCER RESEARCH, 2024, 84 (06)
  • [47] Make Segment Anything Model Perfect on Shadow Detection
    Chen, Xiao-Diao
    Wu, Wen
    Yang, Wenya
    Qin, Hongshuai
    Wu, Xiantao
    Mao, Xiaoyang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61 : 1 - 13
  • [48] SAMP: Adapting Segment Anything Model for Pose Estimation
    Zhu, Zhihang
    Yan, Yunfeng
    Chen, Yi
    Jin, Haoyuan
    Nie, Xuesong
    Qi, Donglian
    Chen, Xi
    2024 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME 2024, 2024,
  • [49] Adapting Segment Anything Model (SAM) for Retinal OCT
    Fazekas, Botond
    Morano, Jose
    Lachinov, Dmitrii
    Aresta, Guilherme
    Bogunovic, Hrvoje
    OPHTHALMIC MEDICAL IMAGE ANALYSIS, OMIA 2023, 2023, 14096 : 92 - 101
  • [50] ASPS: Augmented Segment Anything Model for Polyp Segmentation
    Li, Huiqian
    Zhang, Dingwen
    Yao, Jieru
    Han, Longfei
    Li, Zhongyu
    Han, Junwei
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT IX, 2024, 15009 : 118 - 128