Segment anything model for medical image analysis: An experimental study

Cited by: 192
Authors
Mazurowski, Maciej A. [1 ,2 ,3 ,4 ]
Dong, Haoyu [2 ,5 ]
Gu, Hanxue [2 ]
Yang, Jichen [2 ]
Konz, Nicholas [2 ]
Zhang, Yixin [2 ]
Affiliations
[1] Duke Univ, Dept Radiol, Durham, NC 27708 USA
[2] Duke Univ, Dept Elect & Comp Engn, Durham, NC 27708 USA
[3] Duke Univ, Dept Comp Sci, Durham, NC 27708 USA
[4] Duke Univ, Dept Biostat & Bioinformat, Durham, NC 27708 USA
[5] Duke Univ, Hock Plaza,2424 Erwin Rd, Durham, NC 27704 USA
Funding
US National Institutes of Health;
Keywords
Segmentation; Foundation models; Deep learning;
DOI
10.1016/j.media.2023.102918
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While the model's performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets spanning various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance with a single prompt varies widely depending on the dataset and the task, from IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as organs in computed tomography, and poorer in various other scenarios, such as brain tumor segmentation. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms the similar interactive methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple point prompts are provided iteratively, SAM's performance generally improves only slightly, while the other methods improve to a level that surpasses SAM's point-based performance. We also provide illustrations of SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior under prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others.
SAM has the potential to make a significant impact in automated medical image segmentation, but appropriate care needs to be taken when using it. Code for evaluating SAM is publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation.
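The abstract's "standard method that simulates interactive segmentation" can be sketched as below. This is a hedged illustration, not the authors' exact procedure: taking the point prompt as the mask centroid (snapped inside the object) and the box prompt as the mask's tight bounding box are common conventions in interactive-segmentation evaluation, and the function name `simulate_prompts` is hypothetical.

```python
import numpy as np

def simulate_prompts(mask: np.ndarray):
    """Derive one point prompt and one box prompt from a binary
    ground-truth mask, mimicking interactive-segmentation evaluation.

    Returns (point_xy, point_label, box_xyxy) in the (x, y) convention
    used by SAM-style predictors.
    """
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        raise ValueError("mask is empty; no prompt can be derived")

    # Point prompt: centroid of the foreground, snapped to the nearest
    # foreground pixel so it is guaranteed to lie inside the object.
    cx, cy = xs.mean(), ys.mean()
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    i = int(np.argmin(d2))
    point_xy = (int(xs[i]), int(ys[i]))
    point_label = 1  # 1 = foreground click

    # Box prompt: tight bounding box (x_min, y_min, x_max, y_max).
    box_xyxy = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return point_xy, point_label, box_xyxy
```

Prompts produced this way would then be passed to SAM's predictor, e.g. via `SamPredictor.predict(point_coords=..., point_labels=..., box=...)` in the official `segment-anything` package.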
Pages: 11
Related papers
50 entries in total
  • [1] Application of Segment Anything Model in Medical Image Segmentation
    Wu, Tong
    Hu, Haoji
    Feng, Yang
    Luo, Qiong
    Xu, Dong
    Zheng, Weizeng
    Jin, Neng
    Yang, Chen
    Yao, Jincao
    Zhongguo Jiguang/Chinese Journal of Lasers, 2024, 51 (21):
  • [2] A review of the Segment Anything Model (SAM) for medical image analysis: Accomplishments and perspectives
    Ali, Mudassar
    Wu, Tong
    Hu, Haoji
    Luo, Qiong
    Xu, Dong
    Zheng, Weizeng
    Jin, Neng
    Yang, Chen
    Yao, Jincao
    COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2025, 119
  • [3] Medical SAM adapter: Adapting segment anything model for medical image segmentation
    Wu, Junde
    Wang, Ziyue
    Hong, Mingxuan
    Ji, Wei
    Fu, Huazhu
    Xu, Yanwu
    Xu, Min
    Jin, Yueming
    MEDICAL IMAGE ANALYSIS, 2025, 102
  • [4] DeSAM: Decoupled Segment Anything Model for Generalizable Medical Image Segmentation
    Gao, Yifan
    Xia, Wei
    Hu, Dingdu
    Wang, Wenkui
    Gao, Xin
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT XII, 2024, 15012 : 509 - 519
  • [5] Segment anything model for medical images?
    Huang, Yuhao
    Yang, Xin
    Liu, Lian
    Zhou, Han
    Chang, Ao
    Zhou, Xinrui
    Chen, Rusi
    Yu, Junxuan
    Chen, Jiongquan
    Chen, Chaoyu
    Liu, Sijing
    Chi, Haozhe
    Hu, Xindi
    Yue, Kejuan
    Li, Lei
    Grau, Vicente
    Fan, Deng-Ping
    Dong, Fajin
    Ni, Dong
    MEDICAL IMAGE ANALYSIS, 2024, 92
  • [6] PESAM: Privacy-Enhanced Segment Anything Model for Medical Image Segmentation
    Cai, Jiuyun
    Niu, Ke
    Pan, Yijie
    Tai, Wenjuan
    Han, Jiacheng
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT II, ICIC 2024, 2024, 14863 : 94 - 105
  • [7] Segment anything model for medical image segmentation: Current applications and future directions
    Zhang, Yichi
    Shen, Zhenrong
    Jiao, Rushi
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 171
  • [8] Text Knowledge-guided Segment Anything Model for Medical Image Segmentation
    Kim, Young Woon
    Cho, Hyunjun
    Ko, Sung-Jea
    Jung, Seung-Won
    2024 INTERNATIONAL TECHNICAL CONFERENCE ON CIRCUITS/SYSTEMS, COMPUTERS, AND COMMUNICATIONS, ITC-CSCC 2024, 2024,
  • [9] Drilling rock image segmentation and analysis using segment anything model
    Shan, Liqun
    Liu, Yanchang
    Du, Ke
    Paul, Shovon
    Zhang, Xingli
    Hei, Xiali
    ADVANCES IN GEO-ENERGY RESEARCH, 2024, 12 (02): : 89 - 101
  • [10] Matte anything: Interactive natural image matting with segment anything model
    Yao, Jingfeng
    Wang, Xinggang
    Ye, Lang
    Liu, Wenyu
    IMAGE AND VISION COMPUTING, 2024, 147