Bidirectional Self-Training with Multiple Anisotropic Prototypes for Domain Adaptive Semantic Segmentation

Cited by: 13
Authors
Lu, Yulei [1 ]
Luo, Yawei [1 ]
Zhang, Li [2 ]
Li, Zheyang [3 ]
Yang, Yi [1 ]
Xiao, Jun [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Zhejiang Insigma Digital Technol Co Ltd, Hangzhou, Peoples R China
[3] Hikvision Res Inst, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China; Natural Science Foundation of Zhejiang Province;
Keywords
Semantic Segmentation; Unsupervised Domain Adaptation; Gaussian Mixture Model; Self-training;
DOI
10.1145/3503161.3548225
CLC Number
TP39 [Computer Applications];
Discipline Codes
081203 ; 0835 ;
Abstract
A thriving trend in domain adaptive segmentation is to generate high-quality pseudo labels for the target domain and retrain the segmentor on them. Under this self-training paradigm, some competitive methods have resorted to latent-space information: they establish the feature centroids (a.k.a. prototypes) of the semantic classes and determine the pseudo-label candidates by their distances from these centroids. In this paper, we argue that the latent space contains more information to be exploited, and thus take one step further to capitalize on it. Firstly, instead of merely using the source-domain prototypes to determine the target pseudo labels, as most traditional methods do, we bidirectionally produce target-domain prototypes to downweight those source features which might be too hard or disturbed for the adaptation. Secondly, existing attempts simply model each category as a single, isotropic prototype while ignoring the variance of the feature distribution, which can lead to confusion between similar categories. To cope with this issue, we propose to represent each category with multiple anisotropic prototypes via a Gaussian Mixture Model, in order to fit the de facto distribution of the source domain and estimate the likelihood of target samples based on the probability density. We apply our method to the GTA5->Cityscapes and Synthia->Cityscapes tasks and achieve 61.2% and 62.8% mean IoU respectively, substantially outperforming other competitive self-training methods. Notably, in some categories which severely suffer from categorical confusion, such as "truck" and "bus", our method achieves 56.4% and 68.8% respectively, which further demonstrates the effectiveness of our design. The code and model are available at https://github.com/luyvlei/BiSMAPs.
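The core idea described in the abstract (one Gaussian mixture of multiple anisotropic components per semantic class, with pseudo labels assigned and filtered by probability density) can be sketched as follows. This is a minimal illustration using scikit-learn's GaussianMixture on per-pixel feature vectors; the function names (fit_class_gmms, pseudo_labels) and the thresholding scheme are illustrative assumptions, not taken from the authors' released code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(features, labels, n_components=2, seed=0):
    """Fit one GMM per semantic class on (labeled) source-domain features.
    covariance_type='full' gives each component an anisotropic covariance."""
    gmms = {}
    for c in np.unique(labels):
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="full",
                              random_state=seed)
        gmm.fit(features[labels == c])
        gmms[c] = gmm
    return gmms

def pseudo_labels(features, gmms, threshold):
    """Assign each target-domain feature the class whose GMM gives the
    highest log-likelihood; reject low-density samples with label -1."""
    classes = sorted(gmms)
    # (N, C) matrix of per-class log-densities
    ll = np.stack([gmms[c].score_samples(features) for c in classes], axis=1)
    out = np.array([classes[i] for i in ll.argmax(axis=1)])
    out[ll.max(axis=1) < threshold] = -1  # ignore index for retraining
    return out
```

In the full method these likelihoods would feed the self-training loop (retraining the segmentor on the retained pseudo labels), and the bidirectional step would analogously build target-domain prototypes to downweight hard source features.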
Pages: 1405 - 1415
Page count: 11
Related Papers
50 records
  • [21] Domain Adaptive LiDAR Point Cloud Segmentation via Density-Aware Self-Training
    Xiao, Aoran
    Huang, Jiaxing
    Liu, Kangcheng
    Guan, Dayan
    Zhang, Xiaoqin
    Lu, Shijian
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (10) : 13627 - 13639
  • [22] Source-free domain adaptive segmentation with class-balanced complementary self-training
    Huang, Yongsong
    Xie, Wanqing
    Li, Mingzhen
    Xiao, Ethan
    You, Jane
    Liu, Xiaofeng
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2023, 146
  • [23] Self-training guided disentangled adaptation for cross-domain remote sensing image semantic segmentation
    Zhao, Qi
    Lyu, Shuchang
    Zhao, Hongbo
    Liu, Binghao
    Chen, Lijiang
    Cheng, Guangliang
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 127
  • [24] Unsupervised global-local domain adaptation with self-training for remote sensing image semantic segmentation
    Zhang, Junbo
    Li, Zhiyong
    Wang, Mantao
    Li, Kunhong
    INTERNATIONAL JOURNAL OF REMOTE SENSING, 2025, 46 (05) : 2254 - 2284
  • [25] Semisupervised Semantic Segmentation of Remote Sensing Images With Consistency Self-Training
    Li, Jiahao
    Sun, Bin
    Li, Shutao
    Kang, Xudong
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [26] A Closer Look at Self-training for Zero-Label Semantic Segmentation
    Pastore, Giuseppe
    Cermelli, Fabio
    Xian, Yongqin
    Mancini, Massimiliano
    Akata, Zeynep
    Caputo, Barbara
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 2687 - 2696
  • [27] Learning from Future: A Novel Self-Training Framework for Semantic Segmentation
    Du, Ye
    Shen, Yujun
    Wang, Haochen
    Fei, Jingjing
    Li, Wei
    Wu, Liwei
    Zhao, Rui
    Fu, Zehua
    Liu, Qingjie
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [28] Domain-Invariant Prototypes for Semantic Segmentation
    Yang, Zhengeng
    Yu, Hongshan
    Sun, Wei
    Cheng, Li
    Mian, Ajmal
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08) : 7614 - 7627
  • [29] Self-training and Multi-level Adversarial Network for Domain Adaptive Remote Sensing Image Segmentation
    Zheng, Yilin
    He, Lingmin
    Wu, Xiangping
    Pan, Chen
    NEURAL PROCESSING LETTERS, 2023, 55 (08) : 10613 - 10638