Multi-modal Remote Sensing Image Classification for Low Sample Size Data

Cited by: 0
Authors
He, Qi [1 ]
Lee, Yao [1 ]
Huang, Dongmei [1 ]
He, Shengqi [1 ]
Song, Wei [1 ]
Du, Yanling [1 ]
Affiliations
[1] Shanghai Ocean Univ, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
deep learning; multi-modal; convolution neural network; high level feature fusion; remote sensing classification;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, multiple and heterogeneous remote sensing images have provided new opportunities for Earth observation research. Using deep learning to obtain the shared representative information between different modalities is important for solving the problem of geographical region classification. In this paper, a CNN-based multi-modal framework for low-sample-size classification of remote sensing images is introduced. The method has three main stages. First, features are extracted from high- and low-resolution remote sensing images separately using multiple convolution layers. Then, the two types of features are fused at the fusion layer. Finally, the fused features are used to train a classifier. The novelty of this method is that it not only considers the complementary relationship between the two modalities but also enhances the value of a small number of samples. In our experiments, the proposed model achieves state-of-the-art performance, being more accurate than comparable architectures such as the single-modal LeNet, NanoNets, and the multi-modal H&L-LeNet trained with twice as many samples.
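The three-stage pipeline described in the abstract (per-modality feature extraction, high-level feature fusion, then classification) can be sketched in PyTorch as below. This is a minimal illustration, not the paper's actual architecture: the layer sizes, the use of concatenation as the fusion operation, and the class/input dimensions are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class MultiModalFusionNet(nn.Module):
    """Illustrative three-stage multi-modal classifier:
    two CNN branches -> high-level feature fusion -> classifier."""

    def __init__(self, num_classes=6):
        super().__init__()

        def branch():
            # Modality-specific convolutional feature extractor
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 32, 1, 1)
                nn.Flatten(),             # -> (B, 32)
            )

        self.high_res_branch = branch()
        self.low_res_branch = branch()
        self.classifier = nn.Linear(32 * 2, num_classes)

    def forward(self, x_high, x_low):
        # Stage 1: extract features from each modality separately
        f_high = self.high_res_branch(x_high)
        f_low = self.low_res_branch(x_low)
        # Stage 2: fuse high-level features (concatenation here)
        fused = torch.cat([f_high, f_low], dim=1)
        # Stage 3: classify the fused representation
        return self.classifier(fused)

model = MultiModalFusionNet(num_classes=6)
# High- and low-resolution inputs may have different spatial sizes;
# the adaptive pooling makes the branches size-agnostic.
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 6])
```

Concatenation is the simplest fusion operator; the point is that the complementary modality-specific features are combined before a single classifier is trained on the joint representation.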
Pages: 6
Related Papers
50 records in total
  • [21] Multi-label remote sensing classification with self-supervised gated multi-modal transformers
    Liu, Na
    Yuan, Ye
    Wu, Guodong
    Zhang, Sai
    Leng, Jie
    Wan, Lihong
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2024, 18
  • [22] On the use of Multi-Modal Sensing in Sign Language Classification
    Sharma, Sneha
    Gupta, Rinki
    Kumar, Arun
    2019 6TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND INTEGRATED NETWORKS (SPIN), 2019, : 495 - 500
  • [23] Split Learning of Multi-Modal Medical Image Classification
    Ghosh, Bishwamittra
    Wang, Yuan
    Fu, Huazhu
    Wei, Qingsong
    Liu, Yong
    Goh, Rick Siow Mong
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 1326 - 1331
  • [24] Image and Encoded Text Fusion for Multi-Modal Classification
    Gallo, I.
    Calefati, A.
    Nawaz, S.
    Janjua, M. K.
    2018 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2018, : 203 - 209
  • [25] Enhancing Image Classification Models with Multi-modal Biomarkers
    Caban, Jesus J.
    Liao, David
    Yao, Jianhua
    Mollura, Daniel J.
    Gochuico, Bernadette
    Yoo, Terry
    MEDICAL IMAGING 2011: COMPUTER-AIDED DIAGNOSIS, 2011, 7963
  • [26] A Multi-Modal Multilingual Benchmark for Document Image Classification
    Fujinuma, Yoshinari
    Varia, Siddharth
    Sankaran, Nishant
    Min, Bonan
    Appalaraju, Srikar
    Vyas, Yogarshi
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 14361 - 14376
  • [27] Ticino: A multi-modal remote sensing dataset for semantic segmentation
    Barbato, Mirko Paolo
    Piccoli, Flavio
    Napoletano, Paolo
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249
  • [28] Based on Multi-Feature Information Attention Fusion for Multi-Modal Remote Sensing Image Semantic Segmentation
    Zhang, Chongyu
    2021 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (IEEE ICMA 2021), 2021, : 71 - 76
  • [29] Multi-Stage Fusion and Multi-Source Attention Network for Multi-Modal Remote Sensing Image Segmentation
    Zhao, Jiaqi
    Zhou, Yong
    Shi, Boyu
    Yang, Jingsong
    Zhang, Di
    Yao, Rui
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2021, 12 (06)