Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

Cited: 13
Authors
Tuna, Omer Faruk [1 ]
Catak, Ferhat Ozgur [2 ]
Eskil, M. Taner [1 ]
Affiliations
[1] Isik Univ Istanbul, Istanbul, Turkey
[2] Univ Stavanger Fornebu, Stavanger, Norway
Keywords
Multimedia security; Uncertainty; Adversarial machine learning; Deep learning; Loss maximization; Neural network
DOI
10.1007/s11042-022-12132-7
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, they have been shown to be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies in this new area, called "Adversarial Machine Learning", have devised new adversarial attacks and defenses against them based on more robust DNN architectures. However, most current research has concentrated on utilizing the model's loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout sampling, for adversarial attack purposes: we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, as measured by the model's epistemic uncertainty. Our results show that our proposed hybrid attack approach increases attack success rates from 82.59% to 85.14%, from 82.96% to 90.13%, and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion and CIFAR-10 datasets, respectively.
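The core mechanism described in the abstract — keeping dropout active at inference, measuring the variance of predictions over repeated stochastic forward passes, and perturbing the input to increase that variance — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the random two-layer network, the function names such as `uncertainty_ascent_step`, and the finite-difference gradient (used here in place of analytic backpropagation through a trained model) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network with random weights; dropout stays ACTIVE at
# inference, which is what makes this Monte-Carlo Dropout.
W1 = rng.normal(size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3)); b2 = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with a fresh dropout mask (test-time dropout)."""
    h = np.maximum(x @ W1 + b1, 0.0)
    mask = rng.random(h.shape) > p_drop
    h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
    return softmax(h @ W2 + b2)

def epistemic_uncertainty(x, T=200):
    """Sum over classes of the prediction variance across T MC Dropout passes."""
    preds = np.stack([stochastic_forward(x) for _ in range(T)])
    return preds.var(axis=0).sum()

def uncertainty_ascent_step(x, eps=0.1, delta=1e-3):
    """FGSM-style step: move x by eps in the sign of a (noisy, numerical)
    gradient of the uncertainty, pushing it toward a shifted-domain region."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = delta
        g[i] = (epistemic_uncertainty(x + e) - epistemic_uncertainty(x - e)) / (2 * delta)
    return x + eps * np.sign(g)

x = rng.normal(size=4)
x_adv = uncertainty_ascent_step(x)
print(np.abs(x_adv - x).max())  # perturbation magnitude, bounded by eps
```

Because each uncertainty evaluation is itself stochastic, the finite-difference gradient is noisy; the paper's setting, with a trained DNN, would instead backpropagate through the Monte-Carlo uncertainty estimate, typically combined with a loss-maximization term in the hybrid attack.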
Pages: 11479-11500
Page count: 22
Related Papers (50 total)
  • [41] Robust deep neural network surrogate models with uncertainty quantification via adversarial training
    Zhang, Lixiang
    Li, Jia
    STATISTICAL ANALYSIS AND DATA MINING, 2023, 16 (03) : 295 - 304
  • [42] Epistemic Comparison, Models of Uncertainty, and the Disjunction Puzzle
    Lassiter, Daniel
    JOURNAL OF SEMANTICS, 2015, 32 (04) : 649 - 684
  • [43] Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
    Huang, Yujin
    Hu, Han
    Chen, Chunyang
    2021 IEEE/ACM 43RD INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: SOFTWARE ENGINEERING IN PRACTICE (ICSE-SEIP 2021), 2021, : 101 - 110
  • [44] Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation
    Ozbulak, Utku
    Van Messem, Arnout
    De Neve, Wesley
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT II, 2019, 11765 : 300 - 308
  • [45] Metamorphic Detection of Adversarial Examples in Deep Learning Models With Affine Transformations
    Mekala, Rohan Reddy
    Magnusson, Gudjon Einar
    Porter, Adam
    Lindvall, Mikael
    Diep, Madeline
    2019 IEEE/ACM 4TH INTERNATIONAL WORKSHOP ON METAMORPHIC TESTING (MET 2019), 2019, : 55 - 62
  • [46] Adversarial Attacks in Underwater Acoustic Target Recognition with Deep Learning Models
    Feng, Sheng
    Zhu, Xiaoqian
    Ma, Shuqing
    Lan, Qiang
    REMOTE SENSING, 2023, 15 (22)
  • [47] The Impact of Model Variations on the Robustness of Deep Learning Models in Adversarial Settings
    Juraev, Firuz
    Abuhamad, Mohammed
    Woo, Simon S.
    Thiruvathukal, George K.
    Abuhmed, Tamer
    2024 SILICON VALLEY CYBERSECURITY CONFERENCE, SVCC 2024, 2024
  • [48] Adversarial attack for deep-learning-based fault diagnosis models
    Ge, Yipei
    Wang, Huan
    Liu, Zhiliang
    2021 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY COMPANION (QRS-C 2021), 2021, : 757 - 761
  • [49] Addressing Adversarial Attacks in IoT Using Deep Learning AI Models
    Bommana, Sesibhushana Rao
    Veeramachaneni, Sreehari
    Ahmed, Syed Ershad
    Srinivas, M. B.
    IEEE ACCESS, 2025, 13 : 50437 - 50449
  • [50] ADVRET: An Adversarial Robustness Evaluating and Testing Platform for Deep Learning Models
    Ren, Fei
    Yang, Yonghui
    Hu, Chi
    Zhou, Yuyao
    Ma, Siyou
    2021 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY COMPANION (QRS-C 2021), 2021, : 9 - 14