Exploring Semantic Prompts in the Segment Anything Model for Domain Adaptation

Cited: 4
Authors
Wang, Ziquan [1 ]
Zhang, Yongsheng [1 ]
Zhang, Zhenchao [1 ]
Jiang, Zhipeng [1 ]
Yu, Ying [1 ]
Li, Li [1 ]
Li, Lei [1 ]
Affiliations
[1] PLA Strategic Support Force Information Engineering University, School of Geospatial Information, Zhengzhou 450001, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
segment anything model (SAM); unsupervised domain adaptation; semantic road scene segmentation;
DOI
10.3390/rs16050758
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Discipline Code
08; 0830
Abstract
Robust segmentation under adverse weather conditions is crucial for autonomous driving. However, such scenes are hard to recognize and expensive to annotate, so segmentation models typically perform poorly on them. The recently proposed Segment Anything Model (SAM) can finely segment the spatial structure of a scene and thus provides powerful spatial priors, showing great promise for this problem. However, SAM cannot be applied directly, because driving scenes differ in geographic scale from its training data and its outputs carry no semantic labels. To address these issues, we propose SAM-EDA, which integrates SAM into an unsupervised domain adaptation mean-teacher segmentation framework. In this method, a "teacher-assistant" model provides semantic pseudo-labels that fill the holes in the fine spatial structure given by SAM, yielding pseudo-labels close to the ground truth that then guide the student model's learning; the "teacher-assistant" thus serves to distill knowledge. During testing, only the student model is used, which greatly improves efficiency. We evaluated SAM-EDA on mainstream segmentation benchmarks under adverse weather conditions and obtained a more robust segmentation model.
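To make the pseudo-labelling step concrete, below is a minimal NumPy sketch of one plausible fusion rule consistent with the abstract: each class-agnostic SAM mask is assigned the majority semantic class that the "teacher-assistant" predicts inside it, so SAM contributes the fine spatial structure and the teacher-assistant contributes the semantics. The function name, its signature, and the fallback for pixels no SAM mask covers are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def fuse_sam_with_teacher(sam_masks, teacher_probs, ignore_index=255):
        # Hypothetical fusion rule (not the paper's exact method).
        # sam_masks: list of (H, W) boolean arrays from SAM's automatic
        #            mask generator (class-agnostic regions).
        # teacher_probs: (C, H, W) class scores from the teacher-assistant.
        # Returns an (H, W) integer map of fused pseudo-labels.
        teacher_label = teacher_probs.argmax(axis=0)          # per-pixel class
        pseudo = np.full(teacher_label.shape, ignore_index, dtype=np.int64)
        for mask in sam_masks:
            classes, counts = np.unique(teacher_label[mask], return_counts=True)
            if counts.size:                                   # skip empty masks
                # Majority vote: the whole SAM region takes one class, which
                # fills holes and noise in the teacher's per-pixel prediction.
                pseudo[mask] = classes[counts.argmax()]
        # Pixels not covered by any SAM mask keep the teacher's prediction.
        uncovered = pseudo == ignore_index
        pseudo[uncovered] = teacher_label[uncovered]
        return pseudo

In a mean-teacher setup, the fused map would then supervise the student with a standard segmentation loss, while only the student runs at test time, matching the efficiency claim in the abstract.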
Pages: 12
Related Papers
50 records in total
  • [31] Ahmed, Masud; Hasan, Zahid; Khan, Naima; Roy, Nirmalya; Purushotham, Sanjay; Gangopadhyay, Aryya; You, Suya. Benchmarking domain adaptation for semantic segmentation. UNMANNED SYSTEMS TECHNOLOGY XXIV, 2022, 12124.
  • [32] Li, Shuang; Xie, Mixue; Gong, Kaixiong; Liu, Chi Harold; Wang, Yulin; Li, Wei. Transferable Semantic Augmentation for Domain Adaptation. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 11511-11520.
  • [33] Li, Dawei; Shi, Zongxuan; Zhang, Hao; Zhang, Renhao. Domain Adaptation in Nuclei Semantic Segmentation. INTERNATIONAL CONFERENCE ON COMPUTER VISION, APPLICATION, AND DESIGN (CVAD 2021), 2021, 12155.
  • [34] Dahlmeier, Daniel; Ng, Hwee Tou. Domain adaptation for semantic role labeling in the biomedical domain. BIOINFORMATICS, 2010, 26(08): 1098-1104.
  • [35] Zeng, Shihao; Liu, Xinghong; Zhou, Yi. Decoupling Domain Invariance and Variance with Tailored Prompts for Open-Set Domain Adaptation. 2024 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2024: 645-651.
  • [36] Chen, Xiao-Diao; Wu, Wen; Yang, Wenya; Qin, Hongshuai; Wu, Xiantao; Mao, Xiaoyang. Make Segment Anything Model Perfect on Shadow Detection. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61: 1-13.
  • [37] Zhu, Zhihang; Yan, Yunfeng; Chen, Yi; Jin, Haoyuan; Nie, Xuesong; Qi, Donglian; Chen, Xi. SAMP: Adapting Segment Anything Model for Pose Estimation. 2024 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME 2024, 2024.
  • [38] Fazekas, Botond; Morano, Jose; Lachinov, Dmitrii; Aresta, Guilherme; Bogunovic, Hrvoje. Adapting Segment Anything Model (SAM) for Retinal OCT. OPHTHALMIC MEDICAL IMAGE ANALYSIS, OMIA 2023, 2023, 14096: 92-101.
  • [39] Li, Huiqian; Zhang, Dingwen; Yao, Jieru; Han, Longfei; Li, Zhongyu; Han, Junwei. ASPS: Augmented Segment Anything Model for Polyp Segmentation. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT IX, 2024, 15009: 118-128.
  • [40] Dong, Guanliang; Wang, Zhangquan; Chen, Yourong; Sun, Yuliang; Song, Hongbo; Liu, Liyuan; Cui, Haidong. An efficient segment anything model for the segmentation of medical images. SCIENTIFIC REPORTS, 2024, 14(01).