Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

Cited by: 13
Authors:
Tuna, Omer Faruk [1 ]
Catak, Ferhat Ozgur [2 ]
Eskil, M. Taner [1 ]
Affiliations:
[1] Isik Univ Istanbul, Istanbul, Turkey
[2] Univ Stavanger Fornebu, Stavanger, Norway
Keywords:
Multimedia security; Uncertainty; Adversarial machine learning; Deep learning; Loss maximization; NEURAL-NETWORK
DOI:
10.1007/s11042-022-12132-7
CLC number:
TP [Automation Technology, Computer Technology]
Subject classification code:
0812
Abstract:
Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, they have been shown to be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conducted in this new area, called "Adversarial Machine Learning", to devise new adversarial attacks and to defend against these attacks with more robust DNN architectures. However, most of the current research has concentrated on utilizing the model's loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout sampling, for adversarial attacks, in which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and the shifted versions of the training data distribution, as measured by the model's epistemic uncertainty. Our results show that the proposed hybrid attack approach increases the attack success rate from 82.59% to 85.14% on MNIST Digit, from 82.96% to 90.13% on MNIST Fashion, and from 89.44% to 91.06% on CIFAR-10.
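To make the attack idea described above concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it estimates epistemic uncertainty as the predictive entropy of Monte-Carlo Dropout samples and perturbs the input with iterative signed-gradient steps that maximize a hybrid objective of classification loss plus that uncertainty. The function names and the hyper-parameters eps, alpha, steps, n_samples, and lam are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch code, not the authors' implementation) of an
# uncertainty-driven adversarial attack: Monte-Carlo Dropout estimates epistemic
# uncertainty, and the input is perturbed to maximize a hybrid of model loss
# and that uncertainty. All hyper-parameters below are illustrative.
import torch
import torch.nn.functional as F

def mc_dropout_probs(model, x, n_samples=20):
    """Stack softmax outputs from n_samples stochastic forward passes."""
    model.train()  # keep dropout layers stochastic at inference time
    return torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution, a common uncertainty proxy."""
    mean_p = probs.mean(dim=0)
    return -(mean_p * torch.log(mean_p + 1e-12)).sum(dim=1)

def hybrid_attack(model, x, y, eps=0.1, alpha=0.01, steps=10,
                  n_samples=10, lam=1.0):
    """Iterative signed-gradient attack maximizing loss + lam * uncertainty."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        probs = mc_dropout_probs(model, x_adv, n_samples)
        # Classification loss on the MC-averaged predictive distribution.
        loss = F.nll_loss(torch.log(probs.mean(dim=0) + 1e-12), y)
        objective = loss + lam * predictive_entropy(probs).mean()
        grad = torch.autograd.grad(objective, x_adv)[0]
        # One ascent step, then project back into the eps-ball and pixel range.
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps),
                            0.0, 1.0)
    return x_adv
```

In practice one would typically switch only the dropout modules to train mode (leaving, for example, batch-norm layers in eval mode); the sketch calls model.train() on the whole network to keep the loop short.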
Pages: 11479-11500
Page count: 22
Related papers
50 records in total
  • [31] Exploring adversarial image attacks on deep learning models in oncology
    Joel, Marina
    Umrao, Sachin
    Chang, Enoch
    Choi, Rachel
    Yang, Daniel
    Gilson, Aidan
    Herbst, Roy
    Krumholz, Harlan
    Aneja, Sanjay
    CLINICAL CANCER RESEARCH, 2021, 27 (05)
  • [32] Detecting and Rectifying Adversarial Images Dealt by Deep Learning Models
    Dhanya, S.
    Panicker, Vinitha J.
    2021 5TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONICS, COMMUNICATION, COMPUTER TECHNOLOGIES AND OPTIMIZATION TECHNIQUES (ICEECCOT), 2021, : 657 - 661
  • [33] Adversarial Attacks on Deep Learning Models of Computer Vision: A Survey
    Ding, Jia
    Xu, Zhiwu
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT III, 2020, 12454 : 396 - 408
  • [34] Learning to Generate SAR Images With Adversarial Autoencoder
    Song, Qian
    Xu, Feng
    Zhu, Xiao Xiang
    Jin, Ya-Qiu
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [35] Learning to Generate Chairs with Generative Adversarial Nets
    Zamyatin, Evgeny
    Filchenkov, Andrey
    7TH INTERNATIONAL YOUNG SCIENTISTS CONFERENCE ON COMPUTATIONAL SCIENCE, YSC2018, 2018, 136 : 200 - 209
  • [37] Deep learning approach to generate offline handwritten signatures based on online samples
    Melo, Victor K. S. L.
    Dantas Bezerra, Byron Leite
    Impedovo, Donato
    Pirlo, Giuseppe
    Lundgren, Antonio
    IET BIOMETRICS, 2019, 8 (03) : 215 - 220
  • [38] Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty
    Zhang, Xiyue
    Xie, Xiaofei
    Ma, Lei
    Du, Xiaoning
    Hu, Qiang
    Liu, Yang
    Zhao, Jianjun
    Sun, Meng
    2020 ACM/IEEE 42ND INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2020), 2020, : 739 - 751
  • [39] Feature-Based Adversarial Training for Deep Learning Models Resistant to Transferable Adversarial Examples
    Ryu, Gwonsang
    Choi, Daeseon
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2022, E105D (05) : 1039 - 1049
  • [40] Exploiting Epistemic Uncertainty of Anatomy Segmentation for Anomaly Detection in Retinal OCT
    Seebock, Philipp
    Orlando, Jose Ignacio
    Schlegl, Thomas
    Waldstein, Sebastian M.
    Bogunovic, Hrvoje
    Klimscha, Sophie
    Langs, Georg
    Schmidt-Erfurth, Ursula
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (01) : 87 - 98