Barlow twin self-supervised pre-training for remote sensing change detection

Times Cited: 1
Authors
Feng, Wenqing [1]
Tu, Jihui [2]
Sun, Chenhao [3]
Xu, Wei [1,4]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci, Hangzhou, Peoples R China
[2] Yangtze Univ, Elect & Informat Sch, Jingzhou, Peoples R China
[3] Changsha Univ Sci & Technol, Elect & Informat Engn Sch, Changsha, Peoples R China
[4] Natl Univ Def Technol, Informat Syst & Management Coll, Changsha, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
NETWORKS;
DOI
10.1080/2150704X.2023.2264493
Chinese Library Classification (CLC)
TP7 [Remote Sensing Technology];
Subject Classification Codes
081102; 0816; 081602; 083002; 1404;
Abstract
Remote sensing change detection (CD) methods that rely on supervised deep convolutional neural networks require large-scale labelled data, which is time-consuming and laborious to collect and label, especially for bi-temporal samples containing changed areas. Conversely, acquiring a large volume of unannotated images is relatively easy. Recently, self-supervised contrastive learning has emerged as a promising method for learning from unannotated images, thereby reducing the need for annotation. However, most existing methods employ random values or ImageNet pre-trained models to initialize their encoders and lack prior knowledge tailored to the demands of CD tasks, thus constraining the performance of CD models. To address these challenges, we propose a novel Barlow Twins self-supervised pre-training method for CD (BTSCD), which uses absolute feature differences to directly learn distinct representations associated with changed regions from unlabelled bi-temporal remote sensing images in a self-supervised manner. Experimental results obtained using two publicly available CD datasets demonstrate that our proposed approach exhibits competitive quantitative performance. Moreover, the proposed method achieves results superior to those of existing state-of-the-art methods.
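The abstract describes Barlow Twins pre-training driven by absolute differences of bi-temporal features. The following PyTorch sketch illustrates how such an objective could be wired up; it is a minimal sketch under assumptions: the ResNet-18 encoder, the projector sizes, and the names DiffBarlowTwins and barlow_twins_loss are illustrative and are not the exact BTSCD architecture reported in the paper.

import torch
import torch.nn as nn
import torchvision


def barlow_twins_loss(z_a, z_b, lambda_offdiag=5e-3):
    # Standard Barlow Twins objective: push the cross-correlation matrix
    # between two embedding views towards the identity matrix.
    n, _ = z_a.shape
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / n                                        # (d, d)
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy reduction
    return on_diag + lambda_offdiag * off_diag


class DiffBarlowTwins(nn.Module):
    # Hypothetical wrapper: encode a bi-temporal pair, take the absolute
    # feature difference, project it, and apply the Barlow Twins loss to
    # two augmented views of the same pair.
    def __init__(self, proj_dim=2048):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.encoder = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, proj_dim), nn.BatchNorm1d(proj_dim),
            nn.ReLU(inplace=True), nn.Linear(proj_dim, proj_dim),
        )

    def embed(self, img_t1, img_t2):
        # Absolute difference of the encoded bi-temporal features.
        diff = torch.abs(self.encoder(img_t1) - self.encoder(img_t2))
        return self.projector(diff)

    def forward(self, view_a, view_b):
        # Each view is an independently augmented (t1, t2) image pair.
        z_a = self.embed(*view_a)
        z_b = self.embed(*view_b)
        return barlow_twins_loss(z_a, z_b)


if __name__ == "__main__":
    model = DiffBarlowTwins()
    t1 = torch.randn(8, 3, 224, 224)
    t2 = torch.randn(8, 3, 224, 224)
    # Random tensors stand in for two augmentations of one unlabelled pair.
    loss = model((t1, t2), (t1 + 0.1 * torch.randn_like(t1), t2))
    print(loss.item())

The off-diagonal weight (5e-3) follows the value commonly used with the original Barlow Twins formulation; the actual BTSCD hyper-parameters, augmentations, and dense feature handling may differ.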
Pages: 1087 - 1099
Number of pages: 13
Related Papers
50 records in total
  • [1] Self-supervised Pre-training for Mirror Detection
    Lin, Jiaying
    Lau, Rynson W. H.
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 12193 - 12202
  • [2] Self-supervised ECG pre-training
    Liu, Han
    Zhao, Zhenbo
    She, Qiang
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2021, 70
  • [3] Self-supervised VICReg pre-training for Brugada ECG detection
    Ronan, Robert
    Tarabanis, Constantine
    Chinitz, Larry
    Jankelson, Lior
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [4] FALL DETECTION USING SELF-SUPERVISED PRE-TRAINING MODEL
    Yhdego, Haben
    Audette, Michel
    Paolini, Christopher
    PROCEEDINGS OF THE 2022 ANNUAL MODELING AND SIMULATION CONFERENCE (ANNSIM'22), 2022, : 361 - 371
  • [5] Individualized Stress Mobile Sensing Using Self-Supervised Pre-Training
    Islam, Tanvir
    Washington, Peter
    APPLIED SCIENCES-BASEL, 2023, 13 (21):
  • [6] Self-supervised Pre-training of Text Recognizers
    Kiss, Martin
    Hradis, Michal
    DOCUMENT ANALYSIS AND RECOGNITION-ICDAR 2024, PT IV, 2024, 14807 : 218 - 235
  • [7] Self-supervised Pre-training for Nuclei Segmentation
    Haq, Mohammad Minhazul
    Huang, Junzhou
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT II, 2022, 13432 : 303 - 313
  • [8] EFFECTIVENESS OF SELF-SUPERVISED PRE-TRAINING FOR ASR
    Baevski, Alexei
    Mohamed, Abdelrahman
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7694 - 7698
  • [9] SDCluster: A clustering based self-supervised pre-training method for semantic segmentation of remote sensing images
    Xu, Hanwen
    Zhang, Chenxiao
    Yue, Peng
    Wang, Kaixuan
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2025, 223 : 1 - 14
  • [10] Self-supervised Pre-training with Acoustic Configurations for Replay Spoofing Detection
    Shim, Hye-jin
    Heo, Hee-Soo
    Jung, Jee-weon
    Yu, Ha-Jin
    INTERSPEECH 2020, 2020, : 1091 - 1095