Bidirectional Self-Training with Multiple Anisotropic Prototypes for Domain Adaptive Semantic Segmentation

Cited by: 13
Authors
Lu, Yulei [1]
Luo, Yawei [1]
Zhang, Li [2]
Li, Zheyang [3]
Yang, Yi [1]
Xiao, Jun [1]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Zhejiang Insigma Digital Technol Co Ltd, Hangzhou, Peoples R China
[3] Hikvis Res Inst, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China; Natural Science Foundation of Zhejiang Province
Keywords
Semantic Segmentation; Unsupervised Domain Adaptation; Gaussian Mixture Model; Self-training
DOI
10.1145/3503161.3548225
CLC Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
A thriving trend in domain adaptive segmentation is to generate high-quality pseudo labels for the target domain and retrain the segmentor on them. Under this self-training paradigm, some competitive methods have turned to latent-space information: they establish feature centroids (a.k.a. prototypes) for the semantic classes and determine pseudo-label candidates by their distances from these centroids. In this paper, we argue that the latent space contains more information to be exploited and take one step further to capitalize on it. First, instead of merely using source-domain prototypes to determine the target pseudo labels, as most traditional methods do, we bidirectionally produce target-domain prototypes to degrade those source features that might be too hard or too disturbed for adaptation. Second, existing attempts model each category as a single isotropic prototype and ignore the variance of the feature distribution, which can lead to confusion between similar categories. To cope with this issue, we propose to represent each category with multiple anisotropic prototypes via a Gaussian Mixture Model, in order to fit the actual source-domain distribution and estimate the likelihood of target samples from the probability density. We apply our method to the GTA5->Cityscapes and Synthia->Cityscapes tasks and achieve 61.2% and 62.8% mean IoU respectively, substantially outperforming other competitive self-training methods. Notably, on categories that suffer severely from categorical confusion, such as "truck" and "bus", our method achieves 56.4% and 68.8% respectively, which further demonstrates the effectiveness of our design. The code and model are available at https://github.com/luyvlei/BiSMAPs.
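The abstract describes two latent-space ingredients: fitting multiple anisotropic prototypes per class with a Gaussian Mixture Model on source features, and scoring target features by their likelihood under each class's mixture to pick pseudo-label candidates. The following is a minimal illustrative sketch of that idea, assuming scikit-learn's GaussianMixture, per-pixel features flattened to (N, D) arrays, and a simple per-class quantile threshold for selecting confident samples; the function names, the number of mixture components, and the selection rule are assumptions for illustration and are not taken from the authors' released code (see the repository linked above for the actual implementation).

# Minimal sketch (not the authors' implementation): per-class anisotropic
# prototypes via GMMs fitted on source features, then pseudo-label selection
# for target features by class-wise log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(src_feats, src_labels, n_classes, n_components=3):
    """Fit one full-covariance GMM per semantic class on source features.

    src_feats: (N, D) array of per-pixel source features.
    src_labels: (N,) array of class indices in [0, n_classes).
    Multiple components with full covariances play the role of the
    "multiple anisotropic prototypes" per class described in the abstract.
    """
    gmms = {}
    for c in range(n_classes):
        feats_c = src_feats[src_labels == c]
        if len(feats_c) < n_components:
            continue  # too few samples to fit a mixture for this class
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="full",
                              reg_covar=1e-4)  # regularize for stability
        gmms[c] = gmm.fit(feats_c)
    return gmms

def pseudo_label_targets(tgt_feats, gmms, n_classes, keep_ratio=0.5):
    """Score target features under each class GMM and keep confident pixels.

    Returns pseudo labels (argmax of class log-likelihood) and a boolean mask
    that keeps, per class, the keep_ratio most likely samples; this quantile
    rule is an assumption, not the paper's exact criterion.
    """
    scores = np.full((len(tgt_feats), n_classes), -np.inf)
    for c, gmm in gmms.items():
        scores[:, c] = gmm.score_samples(tgt_feats)  # per-sample log-density
    pseudo = scores.argmax(axis=1)
    best = scores.max(axis=1)
    keep = np.zeros(len(tgt_feats), dtype=bool)
    for c in gmms:
        idx = np.where(pseudo == c)[0]
        if len(idx) == 0:
            continue
        thresh = np.quantile(best[idx], 1.0 - keep_ratio)
        keep[idx] = best[idx] >= thresh
    return pseudo, keep

The bidirectional aspect mentioned in the abstract (building target-domain prototypes to degrade source features that are too hard or disturbed for adaptation) would follow the same pattern in the reverse direction: fit mixtures on confident target features and score source samples under them.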
Pages: 1405-1415
Number of pages: 11