Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

Cited by: 13
Authors
Tuna, Omer Faruk [1 ]
Catak, Ferhat Ozgur [2 ]
Eskil, M. Taner [1 ]
Affiliations
[1] Işık University, Istanbul, Turkey
[2] University of Stavanger, Stavanger, Norway
Keywords
Multimedia security; Uncertainty; Adversarial machine learning; Deep learning; Loss maximization; Neural network
DOI
10.1007/s11042-022-12132-7
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies in this new area, called "adversarial machine learning", have been conducted to devise new adversarial attacks and to defend against them with more robust DNN architectures. However, most current research concentrates on utilizing the model's loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo dropout sampling, for adversarial attacks, in which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, as measured by the model's epistemic uncertainty. Our results show that our proposed hybrid attack approach increases the attack success rate from 82.59% to 85.14% on MNIST Digit, from 82.96% to 90.13% on MNIST Fashion, and from 89.44% to 91.06% on CIFAR-10.
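To make the core idea concrete, the following is a minimal, illustrative sketch, not the authors' implementation, of how epistemic uncertainty estimated via Monte-Carlo dropout can drive an input perturbation. It assumes a PyTorch classifier containing dropout layers and inputs scaled to [0, 1]; the number of stochastic passes, the step size epsilon, and the variance-based uncertainty proxy are placeholder assumptions rather than the paper's exact choices.

    import torch

    def mc_dropout_uncertainty(model, x, n_passes=30):
        # Keep dropout active at inference time. Note train() also switches
        # batch-norm layers, so this sketch assumes a dropout-only model.
        model.train()
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_passes)])   # (T, B, C)
        # Variance across stochastic passes, summed over classes, as a
        # simple scalar proxy for epistemic uncertainty per sample.
        return probs.var(dim=0).sum(dim=-1)               # (B,)

    def uncertainty_ascent_step(model, x, epsilon=8 / 255):
        # One FGSM-style sign step that *increases* epistemic uncertainty,
        # pushing the input toward regions the model was not trained on.
        x_adv = x.clone().detach().requires_grad_(True)
        u = mc_dropout_uncertainty(model, x_adv).sum()
        u.backward()
        with torch.no_grad():
            x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # assumes inputs in [0, 1]

The hybrid attack described in the abstract additionally combines this uncertainty signal with the usual loss gradient; a weighted sum of the two gradients would be one natural way to sketch that variant.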
Pages: 11479-11500
Page count: 22