Improving Semantic Segmentation via Efficient Self-Training

Cited by: 38
Authors
Zhu, Yi [1 ]
Zhang, Zhongyue [2 ]
Wu, Chongruo [3 ]
Zhang, Zhi [1 ]
He, Tong [1 ]
Zhang, Hang [4 ]
Manmatha, R. [1 ]
Li, Mu [1 ]
Smola, Alexander [1 ]
Affiliations
[1] Amazon Web Serv, Santa Clara, CA 95054 USA
[2] Snapchat, Sunnyvale, CA 94085 USA
[3] Univ Calif Davis, Davis, CA 95616 USA
[4] Facebook, Menlo Pk, CA 94025 USA
Funding
Australian Research Council;
Keywords
Training; Semantics; Computational modeling; Image segmentation; Data models; Schedules; Predictive models; Semantic segmentation; semi-supervised learning; self-training; fast training schedule; cross-domain generalization;
DOI
10.1109/TPAMI.2021.3138337
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Starting from the seminal work of Fully Convolutional Networks (FCN), there has been significant progress on semantic segmentation. However, deep learning models often require large amounts of pixelwise annotations to train accurate and robust models. Given the prohibitively expensive annotation cost of segmentation masks, we introduce a self-training framework in this paper to leverage pseudo labels generated from unlabeled data. In order to handle the data imbalance problem of semantic segmentation, we propose a centroid sampling strategy to uniformly select training samples from every class within each epoch. We also introduce a fast training schedule to alleviate the computational burden. This enables us to explore the usage of large amounts of pseudo labels. Our Centroid Sampling based Self-Training framework (CSST) achieves state-of-the-art results on Cityscapes and CamVid datasets. On PASCAL VOC 2012 test set, our models trained with the original train set even outperform the same models trained on the much bigger augmented train set. This indicates the effectiveness of CSST when there are fewer annotations. We also demonstrate promising few-shot generalization capability from Cityscapes to BDD100K and from Cityscapes to Mapillary datasets.
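The centroid sampling strategy described in the abstract draws training samples uniformly across classes within each epoch, so that rare classes are not drowned out by frequent ones. A minimal sketch of this idea, assuming a precomputed per-class index of sample (e.g. crop centroid) IDs; the function name and parameters are illustrative, not from the paper:

```python
import random

def centroid_sample_epoch(class_to_samples, samples_per_class, seed=0):
    """Draw an equal number of training samples from every class.

    class_to_samples: dict mapping class id -> list of sample ids
        (e.g. image crops centered on pixels of that class).
    samples_per_class: draws per class; classes with fewer samples
        are drawn with replacement so rare classes still contribute
        their full share.
    """
    rng = random.Random(seed)
    epoch = []
    for cls, samples in class_to_samples.items():
        if len(samples) >= samples_per_class:
            # enough samples: draw without replacement
            epoch.extend(rng.sample(samples, samples_per_class))
        else:
            # rare class: oversample with replacement
            epoch.extend(rng.choices(samples, k=samples_per_class))
    rng.shuffle(epoch)
    return epoch

# Toy example: a frequent class and a rare class contribute equally.
pool = {"road": list(range(100)), "traffic_light": [1000, 1001, 1002]}
batch = centroid_sample_epoch(pool, samples_per_class=8)
```

Here `batch` contains 16 sample IDs, 8 per class, regardless of the 100-vs-3 imbalance in the pool; with pseudo-labeled data added to the pool, the same scheme keeps each epoch class-balanced.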
Pages: 1589-1602
Page count: 14
Related Papers
50 records in total
  • [1] Weakly-Supervised Semantic Segmentation via Self-training
    Cheng, Hao
    Gu, Chaochen
    Wu, Kaijie
    2020 4TH INTERNATIONAL CONFERENCE ON CONTROL ENGINEERING AND ARTIFICIAL INTELLIGENCE (CCEAI 2020), 2020, 1487
  • [2] Improving Skin Lesion Segmentation with Self-Training
    Dzieniszewska, Aleksandra
    Garbat, Piotr
    Piramidowicz, Ryszard
    CANCERS, 2024, 16 (06)
  • [3] Self-Training for Class-Incremental Semantic Segmentation
    Yu, Lu
    Liu, Xialei
    van de Weijer, Joost
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (11) : 9116 - 9127
  • [4] Adversarial Self-Training with Domain Mask for Semantic Segmentation
    Hsin, Hsien-Kai
    Chiu, Hsiao-Chien
    Lin, Chun-Chen
    Chen, Chih-Wei
    Tsung, Pei-Kuei
    2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019, : 3689 - 3695
  • [5] Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-training
    Zou, Yang
    Yu, Zhiding
    Kumar, B. V. K. Vijaya
    Wang, Jinsong
    COMPUTER VISION - ECCV 2018, PT III, 2018, 11207 : 297 - 313
  • [6] Combining Semantic Self-Supervision and Self-Training for Domain Adaptation in Semantic Segmentation
    Niemeijer, Joshua
    Schaefer, Joerg P.
    2021 IEEE INTELLIGENT VEHICLES SYMPOSIUM WORKSHOPS (IV WORKSHOPS), 2021, : 364 - 371
  • [7] Semisupervised Semantic Segmentation of Remote Sensing Images With Consistency Self-Training
    Li, Jiahao
    Sun, Bin
    Li, Shutao
    Kang, Xudong
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [8] A Closer Look at Self-training for Zero-Label Semantic Segmentation
    Pastore, Giuseppe
    Cermelli, Fabio
    Xian, Yongqin
    Mancini, Massimiliano
    Akata, Zeynep
    Caputo, Barbara
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 2687 - 2696
  • [9] Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation
    Marsden, Robert A.
    Bartler, Alexander
    Doebler, Mario
    Yang, Bin
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [10] Learning from Future: A Novel Self-Training Framework for Semantic Segmentation
    Du, Ye
    Shen, Yujun
    Wang, Haochen
    Fei, Jingjing
    Li, Wei
    Wu, Liwei
    Zhao, Rui
    Fu, Zehua
    Liu, Qingjie
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,