Classification of multi-spectral data with fine-tuning variants of representative models

Cited by: 0
Authors
T. R. Vijaya Lakshmi
Ch. Venkata Krishna Reddy
Padmavathi Kora
K. Swaraja
K. Meenakshi
Ch. Usha Kumari
L. Pratap Reddy
Affiliations
[1] Mahatma Gandhi Institute of Technology, Department of ECE
[2] Chaitanya Bharathi Institute of Technology, Department of EEE
[3] GRIET, Department of ECE
[4] JNTUH CESTH, Department of ECE
Keywords
Multi-spectral data; Tuning variants; Land use and land cover classification; Global average pooling layer; Batch normalization; Feature maps
DOI
Not available
Abstract
Due to rapid urbanization, agricultural drought, and environmental pollution, significant effort has been devoted to land use and land cover (LULC) multi-spectral scene classification. Identifying changes in land use and land cover can facilitate updating geographical maps. In addition, deep learning models face technical challenges on multi-spectral imagery because of its multi-modal nature, along with practical issues such as collecting large-scale, high-resolution data. Limited training samples are a crucial challenge in deep-learning-based LULC classification, since a large number of samples is required to ensure an optimal learning procedure. The present work considers a fraction of the multi-spectral EuroSAT data and evaluates exemplary CNN architectures, a shallow network (VGG16) and a deep network (ResNet152V2), with different tuning variants and additional layers placed before the classification layer, to improve training of the networks for multi-spectral classification. The thirteen spectral bands of the EuroSAT dataset, which covers ten land use and land cover scene classes, were analyzed both band-wise and in combinations of spectral bands. For the scene class ‘Sea & Lake’, the best accuracy obtained was 96.17% with the individual band B08A and 95.7% with the Color Infrared (CIR) band combination. The analysis provided in this work enables the remote sensing research community to boost performance even when the multi-spectral dataset is small.
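To make the described setup concrete, the sketch below builds a Color Infrared (CIR) composite from a 13-band EuroSAT patch and assembles one fine-tuning variant of VGG16 with a global average pooling layer and batch normalization inserted before the classification layer. This is a minimal Keras/TensorFlow sketch under stated assumptions, not the authors' implementation: the Sentinel-2 band ordering, the 64 x 64 patch size, the optimizer, and the frozen-base variant are assumptions, and ResNet152V2 or other tuning variants (e.g., unfreezing part of the convolutional base) would follow the same pattern.

```python
# Hypothetical sketch: CIR composite from a 13-band EuroSAT patch and one
# fine-tuning variant (frozen VGG16 base + GAP + BN + softmax classifier).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10   # ten EuroSAT LULC scene classes
PATCH_SIZE = 64    # EuroSAT patches are 64 x 64 pixels (assumption stated above)

def to_cir_composite(patch_13band: np.ndarray) -> np.ndarray:
    """Stack NIR (B08), Red (B04) and Green (B03) into a 3-channel CIR image.

    Assumes the 13 Sentinel-2 bands are ordered
    B01, B02, B03, B04, ..., B08, B08A, ..., B12 along the last axis.
    """
    nir, red, green = patch_13band[..., 7], patch_13band[..., 3], patch_13band[..., 2]
    cir = np.stack([nir, red, green], axis=-1).astype(np.float32)
    # Per-patch min-max scaling so reflectance values fit the network input range.
    return (cir - cir.min()) / (cir.max() - cir.min() + 1e-8)

def build_vgg16_variant(freeze_base: bool = True) -> tf.keras.Model:
    """VGG16 backbone with GAP and batch normalization before the classifier."""
    base = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet",
        input_shape=(PATCH_SIZE, PATCH_SIZE, 3))
    base.trainable = not freeze_base   # tuning variant: frozen vs. trainable base
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),   # additional layer before classification
        layers.BatchNormalization(),       # additional layer before classification
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping the backbone for tf.keras.applications.ResNet152V2 or calling build_vgg16_variant(freeze_base=False) would give analogous deep-network and fully trainable variants. For individual-band experiments such as B08A, one common workaround is to repeat a single band across the three input channels of the ImageNet-pretrained network; this is an assumption for illustration, not necessarily the authors' choice.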
Pages: 23465–23487
Number of pages: 22
Related papers
50 entries in total
  • [41] Pang, Wei; Zhou, Chuan; Zhou, Xiao-Hua; Wang, Xiaojie. Phased Instruction Fine-Tuning for Large Language Models. Findings of the Association for Computational Linguistics: ACL 2024, 2024: 5735–5748.
  • [42] Radiya-Dixit, Evani; Wang, Xin. How fine can fine-tuning be? Learning efficient language models. International Conference on Artificial Intelligence and Statistics, Vol. 108, 2020: 2435–2442.
  • [43] Patwardhan, S. V.; Dhawan, A. P. Multi-spectral imaging and analysis for classification of melanoma. Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vols. 1–7, 2004, 26: 503–506.
  • [44] Yang, Zijian Győző; Ligeti-Nagy, Noémi. Improve Performance of Fine-tuning Language Models with Prompting. Infocommunications Journal, 2023, 15: 62–68.
  • [45] Zhang, Jie; Wen, Hui; Deng, Liting; Xin, Mingfeng; Li, Zhi; Li, Lun; Zhu, Hongsong; Sun, Limin. HackMentor: Fine-Tuning Large Language Models for Cybersecurity. 2023 IEEE 22nd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom/BigDataSE/CSE/EUC/ISCI 2023), 2024: 452–461.
  • [46] Wortsman, Mitchell; Ilharco, Gabriel; Kim, Jong Wook; Li, Mike; Kornblith, Simon; Roelofs, Rebecca; Lopes, Raphael Gontijo; Hajishirzi, Hannaneh; Farhadi, Ali; Namkoong, Hongseok; Schmidt, Ludwig. Robust fine-tuning of zero-shot models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 7949–7961.
  • [47] Vulić, Ivan; Su, Pei-Hao; Coope, Sam; Gerz, Daniela; Budzianowski, Paweł; Casanueva, Iñigo; Mrkšić, Nikola; Wen, Tsung-Hsien. ConvFiT: Conversational Fine-Tuning of Pretrained Language Models. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 1151–1168.
  • [48] Chada, Rakesh. Simultaneous paraphrasing and translation by fine-tuning Transformer models. Neural Generation and Translation, 2020: 198–203.
  • [49] Borzunov, Alexander; Baranchuk, Dmitry; Dettmers, Tim; Ryabinin, Max; Belkada, Younes; Chumachenko, Artem; Samygin, Pavel; Raffel, Colin. PETALS: Collaborative Inference and Fine-tuning of Large Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL-Demo 2023), Vol. 3, 2023: 558–568.
  • [50] Abe, Hiroyuki; Kobayashi, Tatsuo; Omura, Yuji. Relaxed fine-tuning in models with nonuniversal gaugino masses. Physical Review D, 2007, 76(1).