Contrastive Self-Supervised Learning for Globally Distributed Landslide Detection

Cited by: 6
Authors
Ghorbanzadeh, Omid [1 ]
Shahabi, Hejar [2 ]
Piralilou, Sepideh Tavakkoli [3 ]
Crivellari, Alessandro [4 ]
La Rosa, Laura Elena Cue [5 ]
Atzberger, Clement [1 ]
Li, Jonathan [6 ,7 ]
Ghamisi, Pedram [8 ]
Affiliations
[1] Univ Nat Resources & Life Sci BOKU, Inst Geomat, A-1190 Vienna, Austria
[2] INRS, Ctr Eau Terre Environm, Quebec City, PQ G1K 9A9, Canada
[3] IARAI, A-1030 Vienna, Austria
[4] Natl Taiwan Univ, Dept Geog, Taipei 106319, Taiwan
[5] Wageningen Univ & Res, Lab Geoinformat Sci & Remote Sensing, NL-6708 PB Wageningen, Netherlands
[6] Univ Waterloo, Dept Geog & Environm Management, Waterloo, ON N2L 3G1, Canada
[7] Univ Waterloo, Dept Syst Design Engn, Waterloo, ON N2L 3G1, Canada
[8] Helmholtz Inst Freiberg Resource Technol, Helmholtz Zentrum Dresden Rossendorf, Freiberg, Germany
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Terrain factors; Feature extraction; Data models; Codes; Decoding; Benchmark testing; Deep learning; Landslides; Detection algorithms; Remote sensing; Hazardous areas; landslide detection; multispectral imagery; natural hazard; remote sensing;
DOI
10.1109/ACCESS.2024.3449447
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The Remote Sensing (RS) field continuously grapples with the challenge of transforming satellite data into actionable information. This ongoing issue results in an ever-growing accumulation of unlabeled data, complicating interpretation efforts. The situation becomes even more challenging when satellite data must be used immediately to identify the effects of a natural hazard. Self-supervised learning (SSL) offers a promising approach for learning image representations without labeled data. Once trained, an SSL model can address various tasks with significantly reduced requirements for labeled data. Despite advancements in SSL models, particularly those using contrastive learning methods like MoCo, SimCLR, and SwAV, their potential remains largely unexplored in the context of instance segmentation and semantic segmentation of satellite imagery. This study integrates SwAV within an auto-encoder framework to detect landslides using deca-metric resolution multi-spectral images from the globally distributed, large-scale Landslide4Sense (L4S) 2022 benchmark dataset, employing only 1% and 10% of the labeled data. Our proposed SSL auto-encoder model features two modules: SwAV, which assigns features to prototype vectors to generate encoder codes, and ResNets, serving as the decoder for the downstream task. With just 1% of labeled data, our SSL model performs comparably to ten state-of-the-art deep learning segmentation models that utilize 100% of the labeled data in a fully supervised manner. With 10% of labeled data, our SSL model outperforms all ten fully supervised counterparts trained with 100% of the labeled data.
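The code-assignment step the abstract mentions (SwAV mapping encoder features onto prototype vectors) is typically computed with the Sinkhorn-Knopp algorithm, which balances prototype usage across a batch. The following is a minimal NumPy sketch of that idea only; the dimensions, iteration count, and temperature `eps` are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project features/prototypes onto the unit sphere, as in SwAV.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def sinkhorn(scores, eps=0.05, n_iters=3):
    # Sinkhorn-Knopp: convert feature-prototype similarity scores into
    # soft assignments ("codes") with roughly equal prototype usage.
    Q = np.exp(scores / eps).T           # shape (K prototypes, B samples)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True)  # balance rows (prototypes)
        Q /= K
        Q /= Q.sum(axis=0, keepdims=True)  # normalize columns (samples)
        Q /= B
    return (Q * B).T                     # shape (B, K); each row sums to 1

rng = np.random.default_rng(0)
feats = l2_normalize(rng.normal(size=(8, 16)))    # stand-in encoder features
protos = l2_normalize(rng.normal(size=(4, 16)))   # 4 hypothetical prototypes
codes = sinkhorn(feats @ protos.T)
print(codes.shape)                                # (8, 4)
```

In the full method, the codes produced for one augmented view supervise the cluster predictions of the other view (the "swapped prediction" loss); this sketch covers only the assignment step.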
Pages: 118453-118466
Page count: 14
Related Papers
50 items in total
  • [41] Contrastive self-supervised representation learning framework for metal surface defect detection
    Zabin, Mahe
    Kabir, Anika Nahian Binte
    Kabir, Muhammad Khubayeeb
    Choi, Ho-Jin
    Uddin, Jia
    JOURNAL OF BIG DATA, 2023, 10 (01)
  • [42] Anomalous Sub-Trajectory Detection With Graph Contrastive Self-Supervised Learning
    Kong, Xiangjie
    Lin, Hang
    Jiang, Renhe
    Shen, Guojiang
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (07) : 9800 - 9811
  • [43] SWIN transformer based contrastive self-supervised learning for animal detection and classification
    L. Agilandeeswari
    S. Divya Meena
    Multimedia Tools and Applications, 2023, 82 : 10445 - 10470
  • [44] CARLA: Self-supervised contrastive representation learning for time series anomaly detection
    Darban, Zahra Zamanzadeh
    Webb, Geoffrey I.
    Pan, Shirui
    Aggarwal, Charu C.
    Salehi, Mahsa
    PATTERN RECOGNITION, 2025, 157
  • [45] Cut-in maneuver detection with self-supervised contrastive video representation learning
    Yagiz Nalcakan
    Yalin Bastanlar
    Signal, Image and Video Processing, 2023, 17 : 2915 - 2923
  • [46] DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
    Nguyen, Thanh
    Pham, Trung Xuan
    Zhang, Chaoning
    Luu, Tung M.
    Vu, Thang
    Yoo, Chang D.
    IEEE ACCESS, 2023, 11 : 21534 - 21545
  • [47] Self-Supervised Contrastive Learning In Spiking Neural Networks
    Bahariasl, Yeganeh
    Kheradpisheh, Saeed Reza
    PROCEEDINGS OF THE 13TH IRANIAN/3RD INTERNATIONAL MACHINE VISION AND IMAGE PROCESSING CONFERENCE, MVIP, 2024, : 181 - 185
  • [48] Self-supervised Contrastive Learning for Predicting Game Strategies
    Lee, Young Jae
    Baek, Insung
    Jo, Uk
    Kim, Jaehoon
    Bae, Jinsoo
    Jeong, Keewon
    Kim, Seoung Bum
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 1, 2023, 542 : 136 - 147
  • [49] Contrasting Contrastive Self-Supervised Representation Learning Pipelines
    Kotar, Klemen
    Ilharco, Gabriel
    Schmidt, Ludwig
    Ehsani, Kiana
    Mottaghi, Roozbeh
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9929 - 9939
  • [50] CONTRASTIVE SELF-SUPERVISED LEARNING FOR WIRELESS POWER CONTROL
    Naderializadeh, Navid
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 4965 - 4969