Improve Adversarial Robustness of AI Models in Remote Sensing via Data-Augmentation and Explainable-AI Methods

Cited by: 0
Authors
Tasneem, Sumaiya [1]
Islam, Kazi Aminul [1]
Affiliations
[1] Kennesaw State University, Department of Computer Science, Marietta, GA 30060, USA
Keywords
deep learning; adversarial attack; adversarial robustness; explainable AI; model interpretability; remote sensing; data augmentation; CLASSIFICATION; BENCHMARK
DOI
10.3390/rs16173210
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Subject Classification Code
08; 0830
Abstract
Artificial intelligence (AI) has made remarkable progress in recent years in remote sensing applications, including environmental monitoring, crisis management, city planning, and agriculture. A critical challenge in deploying AI models in real-world remote sensing applications, however, is maintaining their robustness and reliability against adversarial attacks. In an adversarial attack, an attacker adds a crafted perturbation to benign data to mislead an AI model into an incorrect prediction, posing a serious security threat in critical decision-making contexts. In this paper, we develop an improved adversarial training approach that uses explainable-AI-guided features and data augmentation techniques to strengthen AI model predictions on remote sensing data against adversarial attacks. The proposed approach achieved the best adversarial robustness against Projected Gradient Descent (PGD) attacks on the EuroSAT and AID datasets and showed transferable robustness against unseen attacks.
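The approach described above builds on adversarial training against PGD perturbations. Since the abstract does not specify the model architecture, attack budget, or the exact XAI-guidance and augmentation mechanisms, the snippet below is only a minimal sketch of standard L-infinity PGD adversarial training in PyTorch; the hyperparameters (eps, alpha, steps) and helper names (pgd_attack, adversarial_training_step) are illustrative assumptions, not the authors' implementation.

# Minimal L-infinity PGD adversarial-training sketch in PyTorch.
# Assumptions (not taken from the paper): a generic image classifier with
# inputs in [0, 1], eps = 8/255, alpha = 2/255, 10 attack steps. The paper's
# explainable-AI-guided features and augmentation pipeline are NOT reproduced.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-infinity PGD adversarial examples for a batch (x, y)."""
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                               # keep valid pixels
    return x_adv.detach()


def adversarial_training_step(model, optimizer, x, y):
    """One training step on PGD adversarial examples instead of clean inputs."""
    model.train()
    x_adv = pgd_attack(model, x, y)           # inner maximization
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()

In practice, the augmented views and XAI-derived attributions mentioned in the abstract would be folded into this loop, for example by mixing clean and adversarial losses or by weighting the loss with saliency-based masks; the exact combination used by the authors is not given in this record.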
Pages: 17