Dynamic and Adaptive Self-Training for Semi-Supervised Remote Sensing Image Semantic Segmentation

Citations: 1
Authors
Jin, Jidong [1 ,2 ,3 ,4 ]
Lu, Wanxuan [1 ,2 ]
Yu, Hongfeng [1 ,2 ]
Rong, Xuee [1 ,2 ,3 ,4 ]
Sun, Xian [1 ,2 ,3 ,4 ]
Wu, Yirong [1 ,2 ,3 ,4 ]
Affiliations
[1] Chinese Acad Sci, Aerosp Informat Res Inst, Inst Elect, Beijing 100190, Peoples R China
[2] Chinese Acad Sci, Inst Elect, Key Lab Network Informat Syst Technol NIST, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Remote sensing; Semantic segmentation; Transformers; Data models; Training; Semantics; Predictive models; Consistency regularization (CR); remote sensing (RS) image; self-training; semantic segmentation; semisupervised learning (SSL);
DOI
10.1109/TGRS.2024.3407142
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline Classification Code
0708 ; 070902 ;
Abstract
Remote sensing (RS) technology has made remarkable progress, providing a wealth of data for applications such as ecological conservation and urban planning. However, meticulous annotation of these data is labor-intensive, leading to a shortage of labeled data, particularly in tasks like semantic segmentation. Semi-supervised methods that combine consistency regularization (CR) with self-training offer a way to efficiently exploit both labeled and unlabeled data, but they encounter challenges under imbalanced data ratios. To tackle these challenges, we introduce a self-training approach named dynamic and adaptive self-training (DAST), which combines dynamic pseudo-label sampling (DPS), distribution matching (DM), and adaptive threshold updating (ATU). DPS addresses class distribution imbalance by giving priority to classes with fewer samples. Meanwhile, DM and ATU reduce distribution disparities by adjusting model predictions across augmented images within the CR framework, ensuring that they align with the actual data distribution. Experimental results on the Potsdam and iSAID datasets demonstrate that DAST effectively balances class distribution, aligns model predictions with the data distribution, and stabilizes pseudo-labels, leading to state-of-the-art performance on both datasets. These findings highlight the potential of DAST in overcoming the challenges associated with large disparities in labeled-to-unlabeled data ratios.
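The abstract does not give implementation details, but the two ingredients it names can be illustrated generically. Below is a minimal, hypothetical sketch of (a) inverse-frequency sampling weights that prioritize under-represented classes, in the spirit of DPS, and (b) an exponential-moving-average update of per-class confidence thresholds, in the spirit of ATU. The function names, the EMA form, and the momentum value are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def class_sampling_weights(pixel_counts):
    """Inverse-frequency weights: classes with fewer labeled pixels
    receive a higher probability of being sampled for pseudo-labeling."""
    counts = np.asarray(pixel_counts, dtype=float)
    inv = 1.0 / np.maximum(counts, 1.0)   # guard against empty classes
    return inv / inv.sum()                # normalize to a distribution

def update_thresholds(thresholds, class_confidences, momentum=0.9):
    """EMA update of per-class confidence thresholds: each threshold
    drifts toward the model's recent mean confidence for that class."""
    thr = np.asarray(thresholds, dtype=float)
    conf = np.asarray(class_confidences, dtype=float)
    return momentum * thr + (1.0 - momentum) * conf
```

In such a scheme, a rare class both gets sampled more often and, if the model is still unsure about it, sees its acceptance threshold lowered over time, so its pseudo-labels are not filtered out wholesale.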
Pages: 1 - 1
Page count: 14
Related Papers
50 records total
  • [31] Self-training guided disentangled adaptation for cross-domain remote sensing image semantic segmentation
    Zhao, Qi
    Lyu, Shuchang
    Zhao, Hongbo
    Liu, Binghao
    Chen, Lijiang
    Cheng, Guangliang
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 127
  • [32] Semi-supervised Gait Recognition Based on Self-training
    Li, Yanan
    Yin, Yilong
    Liu, Lili
    Pang, Shaohua
    Yu, Qiuhong
    2012 IEEE NINTH INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL-BASED SURVEILLANCE (AVSS), 2012, : 288 - 293
  • [33] Semi-supervised self-training of object detection models
    Rosenberg, C
    Hebert, M
    Schneiderman, H
    WACV 2005: SEVENTH IEEE WORKSHOP ON APPLICATIONS OF COMPUTER VISION, PROCEEDINGS, 2005, : 29 - 36
  • [34] Semi-supervised Continual Learning with Meta Self-training
    Ho, Stella
    Liu, Ming
    Du, Lan
    Li, Yunfeng
    Gao, Longxiang
    Gao, Shang
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 4024 - 4028
  • [35] Federated Self-training for Semi-supervised Audio Recognition
    Tsouvalas, Vasileios
    Saeed, Aaqib
    Ozcelebi, Tanir
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2022, 21 (06)
  • [36] Semi-supervised self-training for decision tree classifiers
    Tanha, Jafar
    van Someren, Maarten
    Afsarmanesh, Hamideh
    International Journal of Machine Learning and Cybernetics, 2017, 8 : 355 - 370
  • [37] SEMI-SUPERVISED FACE RECOGNITION WITH LDA SELF-TRAINING
    Zhao, Xuran
    Evans, Nicholas
    Dugelay, Jean-Luc
    2011 18TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2011,
  • [38] Unsupervised global-local domain adaptation with self-training for remote sensing image semantic segmentation
    Zhang, Junbo
    Li, Zhiyong
    Wang, Mantao
    Li, Kunhong
    INTERNATIONAL JOURNAL OF REMOTE SENSING, 2025, 46 (05) : 2254 - 2284
  • [39] The student-teacher framework guided by self-training and consistency regularization for semi-supervised medical image segmentation
    Li, Boliang
    Xu, Yaming
    Wang, Yan
    Li, Luxiu
    Zhang, Bo
    PLOS ONE, 2024, 19 (04)
  • [40] Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation
    Chaitanya, Krishna
    Erdil, Ertunc
    Karani, Neerav
    Konukoglu, Ender
    MEDICAL IMAGE ANALYSIS, 2023, 87