Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

Cited by: 13
Authors
Tuna, Omer Faruk [1 ]
Catak, Ferhat Ozgur [2 ]
Eskil, M. Taner [1 ]
Affiliations
[1] Isik Univ Istanbul, Istanbul, Turkey
[2] Univ Stavanger Fornebu, Stavanger, Norway
Keywords
Multimedia security; Uncertainty; Adversarial machine learning; Deep learning; Loss maximization; Neural networks
DOI
10.1007/s11042-022-12132-7
Chinese Library Classification (CLC): TP [Automation and Computer Technology]
Discipline code: 0812
Abstract
Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, they have been shown to be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies in this new area, called "Adversarial Machine Learning", have devised new adversarial attacks and defenses based on more robust DNN architectures. However, most current research has concentrated on utilizing the model's loss function to craft adversarial examples or to create robust models. This study explores the use of quantified epistemic uncertainty, obtained from Monte-Carlo Dropout sampling, for adversarial attacks, in which we perturb the input toward shifted-domain regions on which the model has not been trained. We propose new attack ideas that exploit the target model's difficulty in discriminating between samples drawn from the original and shifted versions of the training data distribution, using the model's epistemic uncertainty. Our results show that our proposed hybrid attack approach increases attack success rates from 82.59% to 85.14%, from 82.96% to 90.13%, and from 89.44% to 91.06% on the MNIST Digit, MNIST Fashion, and CIFAR-10 datasets, respectively.
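The core idea in the abstract, estimating epistemic uncertainty via Monte-Carlo Dropout (keeping dropout active over repeated stochastic forward passes) and then perturbing the input toward higher-uncertainty regions, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy two-layer classifier, the predictive-variance measure of uncertainty, and the finite-difference gradient (standing in for the backpropagated gradient a real attack would use) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer classifier; dropout is kept active at inference (MC Dropout).
W1 = rng.normal(0, 0.5, (4, 16))
W2 = rng.normal(0, 0.5, (16, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_forward(x, p_drop=0.5):
    h = np.maximum(x @ W1, 0.0)
    mask = rng.random(h.shape) > p_drop      # dropout stays ON at test time
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return softmax(h @ W2)

def epistemic_uncertainty(x, T=100):
    # T stochastic passes; predictive variance across passes approximates
    # the model's epistemic uncertainty at x.
    preds = np.stack([stochastic_forward(x) for _ in range(T)])
    return preds.var(axis=0).sum()

def uncertainty_ascent_step(x, eps=0.1, delta=1e-3, T=100):
    # Finite-difference estimate of d(uncertainty)/dx, then an FGSM-style
    # signed step that pushes x toward high-uncertainty (shifted-domain) regions.
    base = epistemic_uncertainty(x, T)
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += delta
        grad[i] = (epistemic_uncertainty(xp, T) - base) / delta
    return x + eps * np.sign(grad)

x = rng.normal(0, 1, 4)
u0 = epistemic_uncertainty(x, T=500)
x_adv = uncertainty_ascent_step(x)
u1 = epistemic_uncertainty(x_adv, T=500)
```

Because each uncertainty estimate is itself a Monte-Carlo average, the finite-difference gradient is noisy; the paper's hybrid attack instead combines such uncertainty maximization with the usual loss-based objective.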
Pages: 11479-11500 (22 pages)
Related papers (50 total)
  • [1] Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
    Omer Faruk Tuna
    Ferhat Ozgur Catak
    M. Taner Eskil
    Multimedia Tools and Applications, 2022, 81 : 11479 - 11500
  • [2] Detecting Adversarial Samples for Deep Learning Models: A Comparative Study
    Zhang, Shigeng
    Chen, Shuxin
    Liu, Xuan
    Hua, Chengyao
    Wang, Weiping
    Chen, Kai
    Zhang, Jian
    Wang, Jianxin
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2022, 9 (01): : 231 - 244
  • [3] Adversarial Learning Games with Deep Learning Models
    Chivukula, Aneesh Sreevallabh
    Liu, Wei
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 2758 - 2767
  • [4] Epistemic uncertainty quantification in deep learning classification by the Delta method
    Nilsen, Geir K.
    Munthe-Kaas, Antonella Z.
    Skaug, Hans J.
    Brun, Morten
    NEURAL NETWORKS, 2022, 145 : 164 - 176
  • [5] Adversarial Deep Learning Models with Multiple Adversaries
    Chivukula, Aneesh Sreevallabh
    Liu, Wei
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2019, 31 (06) : 1066 - 1079
  • [6] Adversarial Attacks and Defenses for Deep Learning Models
    Li M.
    Jiang P.
    Wang Q.
    Shen C.
    Li Q.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (05): : 909 - 926
  • [7] Leveraging Uncertainty in Adversarial Learning to Improve Deep Learning Based Segmentation
    Javed, Mahed
    Mihaylova, Lyudmila
    2019 SYMPOSIUM ON SENSOR DATA FUSION: TRENDS, SOLUTIONS, APPLICATIONS (SDF 2019), 2019,
  • [8] Learning and optimization under epistemic uncertainty with Bayesian hybrid models
    Eugene, Elvis A.
    Jones, Kyla D.
    Gao, Xian
    Wang, Jialu
    Dowling, Alexander W.
    COMPUTERS & CHEMICAL ENGINEERING, 2023, 179
  • [9] DEEP ADVERSARIAL ACTIVE LEARNING WITH MODEL UNCERTAINTY FOR IMAGE CLASSIFICATION
    Zhu, Zheng
    Wang, Hongxing
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 1711 - 1715
  • [10] Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
    Everett, Michael
    Lutjens, Bjorn
    How, Jonathan P.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (09) : 4184 - 4198