A Light-Weighted Spectral-Spatial Transformer Model for Hyperspectral Image Classification

Cited: 4
Authors
Arshad, Tahir [1 ]
Zhang, Junping [1 ]
Affiliations
[1] Harbin Inst Technol, Sch Elect & Informat Engn, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Convolutional neural network (CNN); hyperspectral image (HSI) classification; lightweight multihead self-attention; vision transformers (ViTs); NETWORKS;
DOI
10.1109/JSTARS.2024.3419070
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Classifying hyperspectral images (HSIs) in remote sensing applications is challenging due to limited training samples and the high dimensionality of the data. Deep-learning-based methods have recently demonstrated promising results in HSI classification. This article presents a light-weighted spectral-spatial transformer for extracting local features and high-level semantic features from HSI input data. This approach allows the spatial and spectral characteristics to be examined comprehensively while reducing computational cost. The proposed model integrates lightweight multihead self-attention and residual feedforward modules to effectively capture long-range dependencies while addressing the computational challenges associated with transformers. To assess the efficiency of the proposed model, we conducted experiments on four publicly available datasets and compared the results with existing state-of-the-art models. Under limited training samples, the proposed model obtains the best results in terms of both classification accuracy and computational complexity, achieving overall accuracies of 99.91%, 98.06%, 99.43%, and 99.01% on the four datasets.
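The two components named in the abstract, multihead self-attention over a spectral-spatial token patch followed by a residual feed-forward module, can be sketched in plain NumPy as below. This is a minimal shape-level illustration under stated assumptions, not the authors' implementation: the function names, the 5x5-patch / 64-band sizes, and the random (untrained) weights are all hypothetical, and the "lightweight" aspect is shown only as the standard per-head dimensionality split (d_model / n_heads).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_self_attention(x, n_heads, rng):
    # x: (tokens, d_model). Each head attends in d_model // n_heads
    # dimensions, so adding heads does not grow the parameter count.
    # Weights are random here purely to illustrate tensor shapes.
    tokens, d_model = x.shape
    d_head = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        Wq = rng.standard_normal((d_model, d_head)) * 0.02
        Wk = rng.standard_normal((d_model, d_head)) * 0.02
        Wv = rng.standard_normal((d_model, d_head)) * 0.02
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(d_head))  # (tokens, tokens)
        heads.append(attn @ v)                     # (tokens, d_head)
    Wo = rng.standard_normal((d_model, d_model)) * 0.02
    return np.concatenate(heads, axis=-1) @ Wo    # back to (tokens, d_model)

def residual_feedforward(x, hidden, rng):
    # Two-layer feed-forward (ReLU) with a residual skip connection.
    d_model = x.shape[-1]
    W1 = rng.standard_normal((d_model, hidden)) * 0.02
    W2 = rng.standard_normal((hidden, d_model)) * 0.02
    return x + np.maximum(x @ W1, 0.0) @ W2

rng = np.random.default_rng(0)
# Hypothetical input: a 5x5 spatial patch flattened to 25 tokens,
# each carrying 64 spectral features.
patch = rng.standard_normal((25, 64))
y = patch + multihead_self_attention(patch, n_heads=4, rng=rng)  # attention + residual
y = residual_feedforward(y, hidden=128, rng=rng)
print(y.shape)  # (25, 64): shape is preserved, so blocks can be stacked
```

Because both blocks preserve the (tokens, d_model) shape, they can be stacked into a deeper transformer encoder, with a classification head reading out the resulting features.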
Pages: 12008-12019 (12 pages)
Related Papers (50 total)
  • [21] Spectral-Spatial Response for Hyperspectral Image Classification
    Wei, Yantao
    Zhou, Yicong
    Li, Hong
    REMOTE SENSING, 2017, 9 (03):
  • [22] Masked Auto-Encoding Spectral-Spatial Transformer for Hyperspectral Image Classification
    Ibanez, Damian
    Fernandez-Beltran, Ruben
    Pla, Filiberto
    Yokoya, Naoto
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [23] S2IT: Spectral-Spatial Interactive Transformer for Hyperspectral Image Classification
    Wang, Minhui
    Sun, Yaxiu
    Xiang, Jianhong
    Zhong, Yu
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2024, 21
  • [24] MASSFormer: Memory-Augmented Spectral-Spatial Transformer for Hyperspectral Image Classification
    Sun, Le
    Zhang, Hang
    Zheng, Yuhui
    Wu, Zebin
    Ye, Zhonglin
    Zhao, Haixing
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 15
  • [25] Spectral-Spatial Constraint Hyperspectral Image Classification
    Ji, Rongrong
    Gao, Yue
    Hong, Richang
    Liu, Qiong
    Tao, Dacheng
    Li, Xuelong
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2014, 52 (03): : 1811 - 1824
  • [26] Spectral-Spatial Mamba for Hyperspectral Image Classification
    Huang, Lingbo
    Chen, Yushi
    He, Xin
    REMOTE SENSING, 2024, 16 (13)
  • [27] Weighted residual self-attention graph-based transformer for spectral-spatial hyperspectral image classification
    Zu, Baokai
    Wang, Hongyuan
    Li, Jianqiang
    He, Ziping
    Li, Yafang
    Yin, Zhixian
    INTERNATIONAL JOURNAL OF REMOTE SENSING, 2023, 44 (03) : 852 - 877
  • [28] Supervised Spectral-Spatial Hyperspectral Image Classification With Weighted Markov Random Fields
    Sun, Le
    Wu, Zebin
    Liu, Jianjun
    Xiao, Liang
    Wei, Zhihui
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2015, 53 (03): : 1490 - 1503
  • [29] Hyperspectral image classification using a spectral-spatial sparse coding model
    Oguslu, Ender
    Zhou, Guoqing
    Li, Jiang
    IMAGE AND SIGNAL PROCESSING FOR REMOTE SENSING XIX, 2013, 8892
  • [30] 3D-Convolution Guided Spectral-Spatial Transformer for Hyperspectral Image Classification
    Varahagiri, Shyam
    Sinha, Aryaman
    Dubey, Shiv Ram
    Singh, Satish Kumar
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 8 - 14