Self-Supervised Pretraining of Transformers for Satellite Image Time Series Classification

Cited by: 111
Authors
Yuan, Yuan [1 ]
Lin, Lei [2 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Geog & Biol Informat, Nanjing 210023, Peoples R China
[2] Beijing Qihoo Technol Co Ltd, Beijing 100015, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Bidirectional encoder representations from Transformers (BERT); classification; satellite image time series (SITS); self-supervised learning; transfer learning; unsupervised pretraining; LAND-COVER CLASSIFICATION; CROP CLASSIFICATION; REPRESENTATION;
DOI
10.1109/JSTARS.2020.3036602
CLC (Chinese Library Classification)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808 ; 0809 ;
Abstract
Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data are scarce. To address this problem, we propose a novel self-supervised pretraining scheme to initialize a transformer-based network by utilizing large-scale unlabeled data. In detail, the model is asked to predict randomly contaminated observations given an entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pretraining is completed, the pretrained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed pretraining scheme, leading to substantial improvements in classification accuracy using the Transformer, a 1-D convolutional neural network, and a bidirectional long short-term memory network. The code and the pretrained model will be available at https://github.com/linlei1214/SITS-BERT upon publication.
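The abstract describes a BERT-style pretraining objective: a random subset of observations in a pixel's time series is contaminated, and a Transformer encoder is trained to recover the original values from the surrounding temporal context. The snippet below is a minimal, hypothetical PyTorch sketch of that idea; the model dimensions, the noise-based corruption, and all names (SITSPretrainModel, pretrain_step) are illustrative assumptions, not the authors' SITS-BERT implementation (see the linked repository for that).

```python
# Hypothetical sketch of masked-observation pretraining for a pixel time series.
# Assumptions: 10 spectral bands per observation, mean-squared-error reconstruction,
# Gaussian-noise contamination. Not the authors' SITS-BERT code.
import torch
import torch.nn as nn

class SITSPretrainModel(nn.Module):
    def __init__(self, n_bands=10, d_model=64, n_heads=4, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)                 # spectral embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_bands)                  # reconstruct bands

    def forward(self, x):                                        # x: (batch, time, bands)
        return self.head(self.encoder(self.embed(x)))

def pretrain_step(model, series, mask_ratio=0.15, noise_std=0.5):
    """Contaminate a random subset of timestamps and regress the original values."""
    mask = torch.rand(series.shape[:2]) < mask_ratio             # (batch, time) boolean
    corrupted = series.clone()
    corrupted[mask] += noise_std * torch.randn_like(corrupted[mask])
    pred = model(corrupted)
    # The loss is computed only on the contaminated observations.
    return nn.functional.mse_loss(pred[mask], series[mask])

# Usage on random unlabeled data: a batch of 8 pixels, 24 timestamps, 10 bands.
model = SITSPretrainModel()
loss = pretrain_step(model, torch.randn(8, 24, 10))
loss.backward()
```

Restricting the loss to the contaminated timestamps mirrors masked-token prediction in BERT: the encoder must model spectral-temporal dependencies across the whole series rather than simply copying its input. For downstream use, the abstract indicates the pretrained encoder is fine-tuned end-to-end on a small labeled classification set.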
Pages: 474-487
Page count: 14
Related papers (showing items 31-40 of 50)
  • [31] On Separate Normalization in Self-supervised Transformers
    Chen, Xiaohui
    Wang, Yinkai
    Du, Yuanqi
    Hassoun, Soha
    Liu, Li-Ping
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [32] SELF-SUPERVISED DISENTANGLED EMBEDDING FOR ROBUST IMAGE CLASSIFICATION
    Liu, Lanqing
    Duan, Zhenyu
    Xu, Guozheng
    Xu, Yi
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1494 - 1498
  • [33] Self-Supervised Representation Learning for Document Image Classification
    Siddiqui, Shoaib Ahmed
    Dengel, Andreas
    Ahmed, Sheraz
    IEEE ACCESS, 2021, 9 : 164358 - 164367
  • [34] A Masked Self-Supervised Pretraining Method for Face Parsing
    Li, Zhuang
    Cao, Leilei
    Wang, Hongbin
    Xu, Lihong
    MATHEMATICS, 2022, 10 (12)
  • [35] Heuristic Attention Representation Learning for Self-Supervised Pretraining
    Van Nhiem Tran
    Liu, Shen-Hsuan
    Li, Yung-Hui
    Wang, Jia-Ching
    SENSORS, 2022, 22 (14)
  • [36] Self-supervised Pretraining Isolated Forest for Outlier Detection
    Liang, Dong
    Wang, Jun
    Gao, Xiaoyu
    Wang, Jiahui
    Zhao, Xiaoyong
    Wang, Lei
    2022 INTERNATIONAL CONFERENCE ON BIG DATA, INFORMATION AND COMPUTER NETWORK (BDICN 2022), 2022, : 306 - 310
  • [37] Self-Supervised Pretraining with DICOM metadata in Ultrasound Imaging
    Hu, Szu-Yeu
    Wang, Shuhang
    Weng, Wei-Hung
    Wang, JingChao
    Wang, XiaoHong
    Ozturk, Arinc
    Li, Qian
    Kumar, Viksit
    Samir, Anthony E.
    MACHINE LEARNING FOR HEALTHCARE CONFERENCE, VOL 126, 2020, 126 : 732 - 748
  • [38] How Useful is Self-Supervised Pretraining for Visual Tasks?
    Newell, Alejandro
    Deng, Jia
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 7343 - 7352
  • [39] Self-supervised pretraining improves the performance of classification of task functional magnetic resonance imaging
    Shi, Chenwei
    Wang, Yanming
    Wu, Yueyang
    Chen, Shishuo
    Hu, Rongjie
    Zhang, Min
    Qiu, Bensheng
    Wang, Xiaoxiao
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [40] Trajectory Prediction Method Enhanced by Self-supervised Pretraining
    Li, Linhui
    Fu, Yifan
    Wang, Ting
    Wang, Xuecheng
    Lian, Jing
    Qiche Gongcheng/Automotive Engineering, 2024, 46 (07): 1219 - 1227