Evaluating and Improving Adversarial Robustness of Deep Learning Models for Intelligent Vehicle Safety

Cited by: 1
Authors
Hussain, Manzoor [1 ]
Hong, Jang-Eui [1 ]
Affiliation
[1] Chungbuk Natl Univ, Coll Elect & Comp Engn, Dept Comp Sci, Cheongju 28644, South Korea
Funding
National Research Foundation of Singapore
Keywords
Perturbation methods; Training; Safety; Prevention and mitigation; Iterative methods; Computational modeling; Accuracy; Transportation; Roads; Adversarial attacks; adversarial defense; autoencoder; deep learning (DL); generative adversarial neural network; robustness; trusted artificial intelligence; ATTACK;
DOI
10.1109/TR.2024.3458805
CLC Number
TP3 [computing technology, computer technology]
Discipline Code
0812
Abstract
Deep learning models have proven effective in intelligent transportation; however, their vulnerability to adversarial attacks poses significant challenges to traffic safety. This article therefore presents a novel technique to evaluate and improve the adversarial robustness of deep learning models. First, we propose a deep-convolutional-autoencoder-based adversarial attack detector that identifies whether or not input samples are adversarial, serving as a preliminary step toward adversarial attack mitigation. Second, we develop a conditional generative adversarial network (c-GAN) that transforms adversarial images back to their original form, alleviating adversarial attacks by restoring the integrity of the perturbed images. We validate our approach with a case study on a traffic sign recognition model. The experimental results show the effectiveness of the adversarial attack mitigator: it achieves an average structural similarity index measure (SSIM) of 0.43 on the LISA (Laboratory for Intelligent and Safe Automobiles)-convolutional neural network (CNN) dataset and 0.38 on the German traffic sign recognition benchmark (GTSRB)-CNN dataset. In terms of peak signal-to-noise ratio (PSNR), the c-GAN model attains an average of 18.65 dB on LISA-CNN and 18.05 dB on GTSRB-CNN. Ultimately, the proposed method significantly enhances the average detection accuracy of adversarial traffic signs, raising it from 72.66% to 98% on the LISA-CNN dataset; in addition, an average accuracy improvement of 28% is observed on GTSRB-CNN.
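As context for the PSNR figures reported in the abstract, the following is a minimal, self-contained sketch of the metric itself, not the authors' implementation: PSNR = 10·log10(MAX² / MSE) between an original and a restored image, with MAX = 255 assumed for 8-bit pixels. The flat-patch example data is purely illustrative.

```python
import math

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    Higher values mean the restored image is closer to the original;
    infinity indicates the two images are identical.
    """
    if len(original) != len(restored):
        raise ValueError("images must have the same number of pixels")
    # Mean squared error over all pixels.
    mse = sum((o - r) ** 2 for o, r in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: a flat gray patch vs. a copy perturbed by +/-1 per pixel (MSE = 1).
orig = [128] * 64
pert = [128 + (1 if i % 2 == 0 else -1) for i in range(64)]
print(round(psnr(orig, pert), 2))  # → 48.13
```

Under this definition, the ~18 dB averages reported above correspond to a visible but bounded residual difference between the restored and clean traffic sign images.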
Pages: 15