SnapSeg: Training-Free Few-Shot Medical Image Segmentation with Segment Anything Model

Cited by: 0
Authors
Yu, Nanxi [1 ,2 ]
Cai, Zhiyuan [1 ,3 ]
Huang, Yijin [1 ,4 ]
Tang, Xiaoying [1 ,3 ]
Affiliations
[1] Southern Univ Sci & Technol, Shenzhen, Peoples R China
[2] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[3] Jiaxing Res Inst, Jiaxing, Peoples R China
[4] Univ British Columbia, Vancouver, BC, Canada
Funding
National Natural Science Foundation of China;
Keywords
Few-shot Learning; Medical Image Segmentation; Segment Anything Model;
DOI
10.1007/978-3-031-67751-9_9
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the pursuit of advancing medical diagnosis, automatic segmentation of medical images is crucial, particularly in extending medical expertise to under-resourced regions. However, collecting and annotating medical data for deep learning frameworks are both time-consuming and expensive. Few-shot learning, which leverages limited labeled data to learn new tasks, has been widely applied to medical image segmentation, offering significant advancements. Nonetheless, these methods often rely on extensive unlabeled data to acquire prior medical knowledge. We introduce SnapSeg, a novel few-shot segmentation framework that stands out by requiring only a minimal set of labeled images to directly tackle new segmentation tasks, thus bypassing the need for a traditional training phase. Utilizing either a single or a few labeled examples, SnapSeg extracts multi-level features from the Segment Anything Model (SAM)'s image encoder and incorporates a relative anchor algorithm for precise spatial assessment. Our method demonstrates state-of-the-art performance on the widely-used Abd-CT dataset in medical image segmentation.
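The paper's code is not reproduced in this record, but the training-free idea the abstract describes (comparing features from SAM's image encoder on a labeled support image against a query image) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: a single-level cosine-similarity prototype matcher stands in for SnapSeg's multi-level features and relative anchor algorithm, and all names (`fewshot_mask`, shapes, threshold) are hypothetical, not the authors' implementation.

```python
import numpy as np

def fewshot_mask(support_feats, support_mask, query_feats, threshold=0.5):
    """Training-free few-shot segmentation sketch.

    Builds a foreground prototype from the support image's labeled
    region, then labels each query location by cosine similarity to
    that prototype. Feature maps are (H, W, C); the mask is (H, W)
    binary. In SnapSeg the features would come from SAM's image
    encoder; here they are just arrays.
    """
    # Prototype: mean feature over the labeled support foreground.
    fg = support_feats[support_mask.astype(bool)]        # (N, C)
    proto = fg.mean(axis=0)
    proto = proto / (np.linalg.norm(proto) + 1e-8)

    # Cosine similarity of every query feature to the prototype.
    q_norm = np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8
    sim = (query_feats / q_norm) @ proto                 # (H, W)
    return (sim > threshold).astype(np.uint8)
```

With synthetic features whose foreground and background point along different directions, the matcher recovers the foreground region of the query; the real method would refine this with multi-level features and spatial anchoring.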
Pages: 109-122
Page count: 14
Related Papers
50 records
  • [1] Segment anything model for few-shot medical image segmentation with domain tuning
    Shi, Weili
    Zhang, Penglong
    Li, Yuqin
    Jiang, Zhengang
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (01)
  • [2] BiASAM: Bidirectional-Attention Guided Segment Anything Model for Very Few-Shot Medical Image Segmentation
    Zhou, Wei
    Guan, Guilin
    Cui, Wei
    Yi, Yugen
    IEEE SIGNAL PROCESSING LETTERS, 2025, 32 : 246 - 250
  • [3] Prototypical Metric Segment Anything Model for Data-Free Few-Shot Semantic Segmentation
    Jiang, Zhiyu
    Yuan, Ye
    Yuan, Yuan
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 2800 - 2804
  • [4] Enhancing Nnunet Performance with a Plug-and-Play Segment Anything Model for Few-Shot Medical Image Segmentation (nnSAM)
    Li, Y.
    Jing, B.
    Wang, J.
    Zhang, Y.
    MEDICAL PHYSICS, 2024, 51 (10) : 7694 - 7694
  • [5] Learning few-shot semantic segmentation with error-filtered segment anything model
    Feng, Chen-Bin
    Lai, Qi
    Liu, Kangdao
    Su, Houcheng
    Chen, Hao
    Luo, Kaixi
    Vong, Chi-Man
    VISUAL COMPUTER, 2025,
  • [6] SEMPNet: enhancing few-shot remote sensing image semantic segmentation through the integration of the segment anything model
    Ao, Wei
    Zheng, Shunyi
    Meng, Yan
    GISCIENCE & REMOTE SENSING, 2024, 61 (01)
  • [7] AGSAM: Agent-Guided Segment Anything Model for Automatic Segmentation in Few-Shot Scenarios
    Zhou, Hao
    He, Yao
    Cui, Xiaoxiao
    Xie, Zhi
    BIOENGINEERING-BASEL, 2024, 11 (05):
  • [8] Attentional adversarial training for few-shot medical image segmentation without annotations
    Awudong, Buhailiqiemu
    Li, Qi
    Liang, Zili
    Tian, Lin
    Yan, Jingwen
    PLOS ONE, 2024, 19 (05):
  • [9] Learning what and where to segment: A new perspective on medical image few-shot segmentation
    Feng, Yong
    Wang, Yonghuai
    Li, Honghe
    Qu, Mingjun
    Yang, Jinzhu
    MEDICAL IMAGE ANALYSIS, 2023, 87
  • [10] SAM-RSP: A new few-shot segmentation method based on segment anything model and rough segmentation prompts
    Li, Jiaguang
    Wei, Ying
    Zhang, Wei
    Shi, Zhenrui
    IMAGE AND VISION COMPUTING, 2024, 150