Improve Adversarial Robustness of AI Models in Remote Sensing via Data-Augmentation and Explainable-AI Methods
Cited by: 0
Authors:
Tasneem, Sumaiya [1]
Islam, Kazi Aminul [1]
Affiliations:
[1] Kennesaw State Univ, Dept Comp Sci, Marietta, GA 30060 USA
Keywords:
deep learning;
adversarial attack;
adversarial robustness;
explainable AI;
model interpretability;
remote sensing;
data augmentation;
CLASSIFICATION;
BENCHMARK;
DOI:
10.3390/rs16173210
Chinese Library Classification (CLC): X [Environmental Science, Safety Science];
Discipline Classification Codes: 08; 0830;
Abstract:
Artificial intelligence (AI) has made remarkable progress in recent years in remote sensing applications, including environmental monitoring, crisis management, city planning, and agriculture. However, a critical challenge in deploying AI models in real-world remote sensing applications is maintaining their robustness and reliability, particularly against adversarial attacks. In adversarial attacks, attackers add carefully crafted perturbations to benign data to mislead AI models into incorrect predictions, a serious security threat in crucial decision-making contexts, where inaccurate decisions can carry substantial consequences. In this paper, we propose an adversarial robustness technique that preserves the AI model's accurate predictions in the presence of adversarial perturbations. We address these challenges by developing an improved adversarial training approach that uses explainable-AI-guided features and data augmentation techniques to strengthen AI model predictions on remote sensing data against adversarial attacks. The proposed approach achieved the best adversarial robustness against Projected Gradient Descent (PGD) attacks on the EuroSAT and AID datasets and showed transferability of robustness to unseen attacks.
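For context, the sketch below outlines the standard PGD adversarial-training loop that the abstract builds on: craft an L-infinity-bounded perturbation by iterated gradient ascent, then update the model on the perturbed batch. The PyTorch framing, function names, and hyperparameters (eps, alpha, steps) are illustrative assumptions; the sketch does not reproduce the authors' explainable-AI-guided feature selection or data-augmentation pipeline.

# Minimal PGD adversarial-training sketch (illustrative only, not the authors' exact method).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Start from a random point inside the L-inf ball of radius eps around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # gradient ascent on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    # One optimization step on PGD-perturbed inputs (standard adversarial training).
    model.eval()                     # fix batch-norm/dropout behavior while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()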
Pages: 17