Evaluating and Improving Adversarial Robustness of Deep Learning Models for Intelligent Vehicle Safety

Cited by: 1
|
Authors
Hussain, Manzoor [1 ]
Hong, Jang-Eui [1 ]
Affiliations
[1] Chungbuk Natl Univ, Coll Elect & Comp Engn, Dept Comp Sci, Cheongju 28644, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Perturbation methods; Training; Safety; Prevention and mitigation; Iterative methods; Computational modeling; Accuracy; Transportation; Roads; Adversarial attacks; adversarial defense; autoencoder; deep learning (DL); generative adversarial neural network; robustness; trusted artificial intelligence; ATTACK;
DOI
10.1109/TR.2024.3458805
CLC Classification
TP3 [Computing technology, computer technology];
Discipline Code
0812;
Abstract
Deep learning models have proven effective in intelligent transportation. However, their vulnerability to adversarial attacks poses significant challenges to traffic safety. This article therefore presents a novel technique to evaluate and improve the adversarial robustness of deep learning models. First, we proposed a deep-convolutional-autoencoder-based adversarial attack detector that identifies whether input samples are adversarial; it serves as a preliminary step toward adversarial attack mitigation. Second, we developed a conditional generative adversarial network (c-GAN) that transforms adversarial images back to their original form, alleviating the attack by restoring the integrity of the perturbed images. We present a case study on a traffic sign recognition model to validate our approach. The experimental results showed the effectiveness of the adversarial attack mitigator, which achieved an average structural similarity index measure (SSIM) of 0.43 on the Laboratory for Intelligent and Safe Automobiles (LISA)-convolutional neural network (CNN) dataset and 0.38 on the German traffic sign recognition benchmark (GTSRB)-CNN dataset. In terms of peak signal-to-noise ratio (PSNR), the c-GAN model attained an average of 18.65 on the LISA-CNN dataset and 18.05 on the GTSRB-CNN dataset. Ultimately, the proposed method significantly enhanced the average detection accuracy of adversarial traffic signs, raising it from 72.66% to 98% on the LISA-CNN dataset; an average accuracy improvement of 28% was also observed on the GTSRB-CNN dataset.
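The detection step described in the abstract — flagging an input as adversarial when an autoencoder fails to reconstruct it faithfully — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: `blur_reconstruct` is a toy stand-in for the trained deep convolutional autoencoder, and `threshold` is a hypothetical value that would in practice be calibrated on clean data. The `psnr` helper mirrors the metric the abstract uses to evaluate the c-GAN restorations.

```python
import numpy as np

def psnr(original: np.ndarray, restored: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two images with values in [0, max_val]."""
    mse = np.mean((original - restored) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def is_adversarial(x: np.ndarray, reconstruct, threshold: float) -> bool:
    """Flag x as adversarial when the reconstruction error (MSE) exceeds
    a threshold calibrated on clean inputs."""
    x_hat = reconstruct(x)
    mse = np.mean((x - x_hat) ** 2)
    return bool(mse > threshold)

def blur_reconstruct(x: np.ndarray) -> np.ndarray:
    """Toy stand-in for a trained autoencoder: a 3x3 mean filter that preserves
    smooth (clean-like) images but not high-frequency adversarial noise."""
    padded = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))              # smooth "clean" image
adv = np.clip(clean + rng.uniform(-0.3, 0.3, clean.shape), 0, 1)  # perturbed image

print(is_adversarial(clean, blur_reconstruct, threshold=1e-3))  # → False
print(is_adversarial(adv, blur_reconstruct, threshold=1e-3))    # → True
print(f"PSNR(clean, adv) = {psnr(clean, adv):.2f} dB")
```

The design point the sketch makes is that an autoencoder trained only on clean traffic signs reconstructs in-distribution inputs well, so a large reconstruction error is evidence of adversarial perturbation; the restoration stage (the c-GAN in the paper) then tries to raise SSIM/PSNR back toward the clean image.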
Pages: 15
Related Papers
50 records
  • [31] On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses
    Chhabra, Anshuman
    Sekhari, Ashwin
    Mohapatra, Prasant
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [32] Adversarial Deep Reinforcement Learning for Improving the Robustness of Multi-agent Autonomous Driving Policies
    Sharif, Aizaz
    Marijan, Dusica
    2022 29TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE, APSEC, 2022, : 61 - 70
  • [33] Improving Robustness of Deep Learning Based Knee MRI Segmentation: Mixup and Adversarial Domain Adaptation
    Panfilov, Egor
    Tiulpin, Aleksei
    Klein, Stefan
    Nieminen, Miika T.
    Saarakkala, Simo
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 450 - 459
  • [34] Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations
    Amini, Sajjad
    Ghaemmaghami, Shahrokh
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (07) : 1889 - 1903
  • [35] Adversarial Robustness in Deep Learning: From Practices to Theories
    Xu, Han
    Li, Yaxin
    Liu, Xiaorui
    Wang, Wentao
    Tang, Jiliang
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 4086 - 4087
  • [36] Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons
    Pravin, Chandresh
    Martino, Ivan
    Nicosia, Giuseppe
    Ojha, Varun
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT I, 2021, 12891 : 16 - 28
  • [37] Analyzing the Robustness of Deep Learning Against Adversarial Examples
    Zhao, Jun
    2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2018, : 1060 - 1064
  • [38] Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models
    Li, Linjie
    Lei, Jie
    Gan, Zhe
    Liu, Jingjing
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 2022 - 2031
  • [39] Robustness and Security in Deep Learning: Adversarial Attacks and Countermeasures
    Kaur, Navjot
    Singh, Someet
    Deore, Shailesh Shivaji
    Vidhate, Deepak A.
    Haridas, Divya
    Kosuri, Gopala Varma
    Kolhe, Mohini Ravindra
    JOURNAL OF ELECTRICAL SYSTEMS, 2024, 20 (03) : 1250 - 1257
  • [40] DETECTSEC: Evaluating the robustness of object detection models to adversarial attacks
    Du, Tianyu
    Ji, Shouling
    Wang, Bo
    He, Sirui
    Li, Jinfeng
    Li, Bo
    Wei, Tao
    Jia, Yunhan
    Beyah, Raheem
    Wang, Ting
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (09) : 6463 - 6492