Dimensionality Reduction of Deep Learning for Earth Observation: Smaller, Faster, Simpler

Citations: 0
Authors
Calota, Iulia [1 ,2 ]
Faur, Daniela [1 ]
Datcu, Mihai [1 ]
Affiliations
[1] Univ Politehn Bucuresti, Fac Elect Telecommun & Informat Technol, Res Ctr Space Technol, Dept Appl Elect & Informat Engn, Bucharest 060042, Romania
[2] Infineon Technol Romania, Bucharest 020335, Romania
Keywords
Bag-of-Words; deep learning; downsampling; fast learning; histograms; HYPERSPECTRAL IMAGE CLASSIFICATION; LARGE-SCALE; BENCHMARK-ARCHIVE; BIGEARTHNET; RESOLUTION; NETWORKS;
DOI
10.1109/JSTARS.2023.3270384
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
As deep learning attracts the earth observation (EO) community's interest, the challenge of deriving explainable, actionable information has become a bottleneck in the development of EO models. Computer vision has shown that effective deep learning results require large amounts of training data. This is not the case in EO, where images come from a broad variety of sensors, ranging from multispectral to synthetic aperture radar (SAR), with varying numbers of spectral bands, polarizations, and spatial resolutions. In this article, we present methodologies for fast training of simple deep neural networks on reduced datasets while preserving performance comparable to state-of-the-art methods. The hybrid solutions we propose reduce the dimension of the input data fed to a convolutional neural network: the dataset's image patches are replaced with histograms of pixel intensity, Bag-of-Words representations, or downsampled images. With the proposed approaches, training time and dataset size are significantly reduced while classification performance is preserved. These optimized implementations enable the deployment of lightweight deep learning models for real-time processing tasks that demand accurate results, for instance, in a disaster management scenario. We demonstrate the computational efficiency of these approaches on varied, complex data, both multispectral and SAR, at different resolutions.
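To make the histogram-based reduction concrete, the sketch below illustrates the general idea rather than the authors' exact pipeline: each multispectral patch is replaced by concatenated per-band intensity histograms, and a deliberately small classifier is trained on the reduced vectors. The patch size, band count, number of bins, toy data, and two-layer network are all placeholder assumptions, not values taken from the paper.

```python
# Minimal sketch (assumed configuration, not the paper's): reduce an
# (H, W, B) multispectral patch to B per-band pixel-intensity histograms,
# then train a small dense classifier on the reduced representation.
import numpy as np
import tensorflow as tf

def patch_to_histograms(patch, n_bins=32, value_range=(0.0, 1.0)):
    """Replace an (H, W, B) patch with B per-band intensity histograms,
    concatenated into a single 1-D feature vector of length B * n_bins."""
    feats = []
    for b in range(patch.shape[-1]):
        hist, _ = np.histogram(patch[..., b], bins=n_bins, range=value_range)
        feats.append(hist / hist.sum())      # normalize to a distribution
    return np.concatenate(feats)

# Toy data: 256 random "patches", 64x64 pixels, 12 spectral bands, 5 classes.
rng = np.random.default_rng(0)
patches = rng.random((256, 64, 64, 12)).astype("float32")
labels = rng.integers(0, 5, size=256)
x = np.stack([patch_to_histograms(p) for p in patches])   # shape (256, 12*32)

# The input is already low-dimensional, so a shallow network trains quickly.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(x.shape[1],)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, labels, epochs=2, batch_size=32, verbose=0)
```

The same interface could be swapped to feed downsampled patches or Bag-of-Words vectors instead of histograms; the point of the reduction is that the network input, and hence the stored dataset, shrinks from full patches to short fixed-length vectors.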
Pages: 4484 - 4498
Page count: 15
Related Papers
50 records in total
  • [1] Trapdoors for Lattices: Simpler, Tighter, Faster, Smaller
    Micciancio, Daniele
    Peikert, Chris
    ADVANCES IN CRYPTOLOGY - EUROCRYPT 2012, 2012, 7237 : 700 - 718
  • [2] Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better
    Menghani, Gaurav
    ACM COMPUTING SURVEYS, 2023, 55 (12)
  • [3] Faster, Smaller, and Simpler Model for Multiple Facial Attributes Transformation
    Soeseno, Jonathan Hans
    Tan, Daniel Stanley
    Chen, Wen-Yin
    Hua, Kai-Lung
    IEEE ACCESS, 2019, 7 : 36400 - 36412
  • [4] Dimensionality Reduction for Visual Data Mining of Earth Observation Archives
    Griparis, Andreea
    Faur, Daniela
    Datcu, Mihai
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2016, 13 (11) : 1701 - 1705
  • [5] Microblog Dimensionality Reduction-A Deep Learning Approach
    Xu, Lei
    Jiang, Chunxiao
    Ren, Yong
    Chen, Hsiao-Hwa
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2016, 28 (07) : 1779 - 1789
  • [6] Textual data dimensionality reduction - a deep learning approach
    Kushwaha, Neetu
    Pant, Millie
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (15-16) : 11039 - 11050
  • [7] Deep Learning in Exploring Semantic Relatedness for Microblog Dimensionality Reduction
    Xu, Lei
    Jiang, Chunxiao
    Ren, Yong
    2015 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2015, : 98 - 102
  • [8] Dimensionality Reduction for Image Features using Deep Learning and Autoencoders
    Petscharnig, Stefan
    Lux, Mathias
    Chatzichristofis, Savvas
    PROCEEDINGS OF THE 15TH INTERNATIONAL WORKSHOP ON CONTENT-BASED MULTIMEDIA INDEXING (CBMI), 2017,
  • [9] Stabilizing and Simplifying Sharpened Dimensionality Reduction Using Deep Learning
    Espadoto, M.
    Kim, Y.
    Trager, S. C.
    Roerdink, J. B. T. M.
    Telea, A. C.
    SN COMPUTER SCIENCE, 4 (3)