Detecting Adversarial Examples through Image Transformation

Cited by: 0
Authors: Tian, Shixin [1]; Yang, Guolei [1]; Cai, Ying [1]
Affiliation: [1] Iowa State Univ, Dept Comp Sci, Ames, IA 50011 USA
Keywords: (none listed)
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Deep Neural Networks (DNNs) have demonstrated remarkable performance in a diverse range of applications. Along with the prevalence of deep learning, it has been revealed that DNNs are vulnerable to attacks. By deliberately crafting adversarial examples, an adversary can manipulate a DNN into generating incorrect outputs, which may lead to catastrophic consequences in applications such as disease diagnosis and self-driving cars. In this paper, we propose an effective method to detect adversarial examples in image classification. Our key insight is that adversarial examples are usually sensitive to certain image transformation operations, such as rotation and shifting, whereas a normal image is generally immune to such operations. We implement this idea of image transformation and evaluate its performance against oblivious attacks. Our experiments with two datasets show that our technique can detect nearly 99% of adversarial examples generated by the state-of-the-art algorithm. Beyond oblivious attacks, we also consider white-box attacks and propose introducing randomness into the image transformation process, which achieves a detection ratio of around 70%.
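The abstract's key idea, that adversarial inputs are far more sensitive to small geometric transformations than clean inputs, can be sketched as a simple prediction-consistency test. The function below is an illustrative reconstruction, not the authors' implementation: it applies small pixel shifts (one of the transformations the abstract names), counts label disagreements against the original prediction, and flags the image when the disagreement ratio crosses a threshold. The classifier interface, the shift set, and the threshold are all assumptions.

```python
import numpy as np

def detect_adversarial(image, classify,
                       shifts=((1, 0), (0, 1), (-1, 0), (0, -1)),
                       threshold=0.5):
    """Flag `image` as likely adversarial when the classifier's label is
    unstable under small pixel shifts.

    `classify` maps an HxW (or HxWxC) array to an integer label; the
    `shifts` and `threshold` values are illustrative, not from the paper.
    """
    base_label = classify(image)
    # Count how many shifted copies the classifier labels differently.
    disagreements = sum(
        classify(np.roll(image, shift, axis=(0, 1))) != base_label
        for shift in shifts
    )
    return disagreements / len(shifts) >= threshold
```

A clean image's label typically survives a one-pixel shift, while an adversarial perturbation, tuned to the exact pixel grid, often does not; rotation could be incorporated the same way (e.g. via `scipy.ndimage.rotate`).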
Pages: 4139-4146 (8 pages)
Related Papers (50 total)
  • [1] Lyapunov stability for detecting adversarial image examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    CHAOS SOLITONS & FRACTALS, 2022, 155
  • [2] Detecting adversarial examples using image reconstruction differences
    Sun, Jiaze
    Yi, Meng
    SOFT COMPUTING, 2023, 27 (12) : 7863 - 7877
  • [3] Survey on Detecting and Defending Adversarial Examples for Image Data
    Zhang T.
    Yang K.
    Wei J.
    Liu Y.
    Ning Y.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2022, 59 (06): : 1315 - 1328
  • [5] Detecting Textual Adversarial Examples through Randomized Substitution and Vote
    Wang, Xiaosen
    Xiong, Yifeng
    He, Kun
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, VOL 180, 2022, 180 : 2056 - 2065
  • [6] Detecting chaos in adversarial examples
    Deniz, Oscar
    Pedraza, Anibal
    Bueno, Gloria
    CHAOS SOLITONS & FRACTALS, 2022, 163
  • [7] ADDITION: Detecting Adversarial Examples With Image-Dependent Noise Reduction
    Wang, Yuchen
    Li, Xiaoguang
    Yang, Li
    Ma, Jianfeng
    Li, Hui
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (03) : 1139 - 1154
  • [8] Feature autoencoder for detecting adversarial examples
    Ye, Hongwei
    Liu, Xiaozhang
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (10) : 7459 - 7477
  • [9] Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction
    Liang, Bin
    Li, Hongcheng
    Su, Miaoqiang
    Li, Xirong
    Shi, Wenchang
    Wang, Xiaofeng
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2021, 18 (01) : 72 - 85
  • [10] Detecting Overfitting via Adversarial Examples
    Werpachowski, Roman
    Gyorgy, Andras
    Szepesvari, Csaba
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32