SS-MAE: Spatial–Spectral Masked Autoencoder for Multisource Remote Sensing Image Classification

Cited by: 30
Authors
Lin, Junyan [1 ]
Gao, Feng [1 ]
Shi, Xiaochen [1 ]
Dong, Junyu [1 ]
Du, Qian [2 ]
Affiliations
[1] Ocean Univ China, Sch Comp Sci & Technol, Qingdao 266100, Peoples R China
[2] Mississippi State Univ, Dept Elect & Comp Engn, Starkville, MS 39762 USA
Keywords
Image reconstruction; Feature extraction; Transformers; Image classification; Training; Decoding; Self-supervised learning; Deep learning; hyperspectral image (HSI); masked autoencoder (MAE); multisource data; DECISION FUSION;
DOI: 10.1109/TGRS.2023.3331717
Chinese Library Classification (CLC): P3 [Geophysics]; P59 [Geochemistry]
Discipline codes: 0708; 070902
Abstract
Masked image modeling (MIM) is a popular and effective self-supervised learning method for image understanding. Existing MIM-based methods mostly focus on spatial feature modeling and neglect spectral feature modeling. Moreover, they rely on Transformers for feature extraction, so some local or high-frequency information may be lost. To this end, we propose a spatial-spectral masked autoencoder (SS-MAE) for joint classification of hyperspectral image (HSI) and light detection and ranging (LiDAR)/synthetic aperture radar (SAR) data. Specifically, SS-MAE consists of a spatial-wise branch and a spectral-wise branch: the spatial-wise branch masks random patches and reconstructs the missing pixels, while the spectral-wise branch masks random spectral channels and reconstructs the missing channels. SS-MAE thus fully exploits both the spatial and the spectral representations of the input data. Furthermore, to complement local features during training, we add two lightweight convolutional neural networks (CNNs) for feature extraction, so that both global and local features are taken into account. Extensive experiments on three publicly available multisource datasets verify the superiority of the proposed SS-MAE over several state-of-the-art baselines. The source code is available at https://github.com/summitgao/SS-MAE.
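The two-branch masking scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, patch size, and masking ratio are assumptions, and the actual SS-MAE places Transformer encoders and reconstruction losses on top of this masking step.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_spatial(cube, mask_ratio=0.75, patch=4):
    """Spatial-wise branch (sketch): zero out randomly chosen
    patch x patch spatial blocks across all spectral channels.
    cube: (H, W, C) hyperspectral patch."""
    H, W, C = cube.shape
    nh, nw = H // patch, W // patch
    n_patches = nh * nw
    idx = rng.choice(n_patches, size=int(mask_ratio * n_patches), replace=False)
    mask = np.zeros(n_patches, dtype=bool)
    mask[idx] = True
    out = cube.copy()
    for k in idx:
        i, j = divmod(k, nw)
        out[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch, :] = 0.0
    return out, mask  # masked cube + per-patch boolean mask

def mask_spectral(cube, mask_ratio=0.75):
    """Spectral-wise branch (sketch): zero out randomly chosen
    spectral channels across all spatial positions."""
    H, W, C = cube.shape
    idx = rng.choice(C, size=int(mask_ratio * C), replace=False)
    mask = np.zeros(C, dtype=bool)
    mask[idx] = True
    out = cube.copy()
    out[:, :, mask] = 0.0
    return out, mask  # masked cube + per-channel boolean mask

# Example: a 16x16 patch with 64 spectral channels.
cube = rng.random((16, 16, 64))
sp, sp_mask = mask_spatial(cube)    # 12 of 16 spatial patches masked
se, se_mask = mask_spectral(cube)   # 48 of 64 channels masked
```

The encoder of each branch would then reconstruct the zeroed pixels (spatial branch) or channels (spectral branch) from the visible remainder, which is what drives the self-supervised pretraining.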
Pages: 1-14 (14 pages)