Multi-modal Remote Sensing Image Classification for Low Sample Size Data

Cited: 0
Authors
He, Qi [1]
Lee, Yao [1]
Huang, Dongmei [1]
He, Shengqi [1]
Song, Wei [1]
Du, Yanling [1]
Affiliations
[1] Shanghai Ocean Univ, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
deep learning; multi-modal; convolution neural network; high level feature fusion; remote sensing classification;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, multiple and heterogeneous remote sensing images have provided new opportunities for Earth observation research. Using deep learning to obtain the shared representative information between different modalities is important for solving the problem of geographical region classification. In this paper, a CNN-based multi-modal framework for low-sample-size classification of remote sensing images is introduced. The method has three main stages. First, features are extracted separately from high- and low-resolution remote sensing images using multiple convolution layers. Then, the two types of features are combined at a fusion layer. Finally, the fused features are used to train a classifier. The novelty of this method is that it not only considers the complementary relationship between the two modalities but also enhances the value of a small number of samples. In our experiments, the proposed model obtains state-of-the-art performance, being more accurate than comparable architectures such as single-modal LeNet, NanoNets, and multi-modal H&L-LeNet trained with twice as many samples.
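The abstract's three-stage pipeline (per-modality feature extraction, fusion-layer combination, classification) can be sketched at toy scale. This is a minimal illustration, not the paper's implementation: the linear maps stand in for the convolutional stacks, fusion is done by concatenation (one common high-level fusion choice; the abstract does not specify the exact fusion rule), and all array sizes and the five-class output are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(img, w):
    # Toy per-modality "feature extractor": flatten the patch and apply a
    # linear map + ReLU, standing in for the paper's convolution layers.
    h = img.reshape(-1) @ w
    return np.maximum(h, 0.0)

def fuse(f_hi, f_lo):
    # High-level feature fusion by concatenation of the two modality vectors.
    return np.concatenate([f_hi, f_lo])

def classify(fused, w_cls):
    # Softmax classifier over the fused representation.
    logits = fused @ w_cls
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical inputs: a 32x32 high-resolution patch, a 16x16 low-resolution patch.
img_hi = rng.random((32, 32))
img_lo = rng.random((16, 16))
w_hi  = rng.standard_normal((32 * 32, 64)) * 0.01
w_lo  = rng.standard_normal((16 * 16, 64)) * 0.01
w_cls = rng.standard_normal((128, 5)) * 0.01   # 5 hypothetical land-cover classes

probs = classify(fuse(extract_features(img_hi, w_hi),
                      extract_features(img_lo, w_lo)), w_cls)
print(probs.shape)  # (5,)
```

In a trained version, the extractor and classifier weights would be learned jointly so that the fused representation captures the complementary information of both modalities.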
Pages: 6
Related Papers
50 records in total
  • [41] TransMed: Transformers Advance Multi-Modal Medical Image Classification
    Dai, Yin
    Gao, Yifan
    Liu, Fayu
    DIAGNOSTICS, 2021, 11 (08)
  • [42] Incomplete multi-modal brain image fusion for epilepsy classification
    Zhu, Qi
    Li, Huijie
    Ye, Haizhou
    Zhang, Zhiqiang
    Wang, Ran
    Fan, Zizhu
    Zhang, Daoqiang
    INFORMATION SCIENCES, 2022, 582 : 316 - 333
  • [43] Multitask Collaborative Multi-modal Remote Sensing Target Segmentation Algorithm
    Mao, Xiuhua
    Zhang, Qiang
    Ruan, Hang
    Yang, Yuang
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (08): : 3363 - 3371
  • [44] Multi-modal remote sensing image fusion method guided by local extremum maps-guided image filter
    Sun, Menghui
    Zhu, Xiaoliang
    Niu, Yunzhen
    Li, Yang
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (05) : 4375 - 4383
  • [45] Multi-Modal Fusion Transformer for Visual Question Answering in Remote Sensing
    Siebert, Tim
    Clasen, Kai Norman
    Ravanbakhsh, Mahdyar
    Demir, Beguem
    IMAGE AND SIGNAL PROCESSING FOR REMOTE SENSING XXVIII, 2022, 12267
  • [46] Multi-modal kernel ridge regression for social image classification
    Zhang, Xiaoming
    Chao, Wenhan
    Li, Zhoujun
    Liu, Chunyang
    Li, Rui
    APPLIED SOFT COMPUTING, 2018, 67 : 117 - 125
  • [47] Multi-modal remote perception learning for object sensory data
    Almujally, Nouf Abdullah
    Rafique, Adnan Ahmed
    Al Mudawi, Naif
    Alazeb, Abdulwahab
    Alonazi, Mohammed
    Algarni, Asaad
    Jalal, Ahmad
    Liu, Hui
    FRONTIERS IN NEUROROBOTICS, 2024, 18
  • [48] Multi-modal Extreme Classification
    Mittal, Anshul
    Dahiya, Kunal
    Malani, Shreya
    Ramaswamy, Janani
    Kuruvilla, Seba
    Ajmera, Jitendra
    Chang, Keng-Hao
    Agarwal, Sumeet
    Kar, Purushottam
    Varma, Manik
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 12383 - 12392
  • [49] Exponential Multi-Modal Discriminant Feature Fusion for Small Sample Size
    Zhu, Yanmin
    Peng, Tianhao
    Su, Shuzhi
    IEEE ACCESS, 2022, 10 : 14507 - 14517
  • [50] DeepLight: Reconstructing High-Resolution Observations of Nighttime Light With Multi-Modal Remote Sensing Data
    Zhang, Lixian
    Dong, Runmin
    Yuan, Shuai
    Zhang, Jinxiao
    Chen, Mengxuan
    Zheng, Juepeng
    Fu, Haohuan
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 7563 - 7571