Detecting adversarial examples using image reconstruction differences

Cited: 0
Authors
Jiaze Sun
Meng Yi
Affiliations
[1] Xi’an University of Posts and Telecommunications, School of Computer Science and Technology
[2] Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing
[3] Xi’an Key Laboratory of Big Data and Intelligent Computing
Source
Soft Computing | 2023, Vol. 27
Keywords
Deep neural networks; Adversarial examples; Detection; Compress and reconstruct; Image reconstruction differences; Random forest;
DOI
Not available
Abstract
Adversarial examples (AEs) cause misjudgments and undermine the robustness of DNN systems. Previous studies have defended against AEs by detection, but it remains challenging to achieve stable, high detection performance while keeping false detections low. To this end, an AE detection method named image reconstruction differences (IRD) is proposed to enhance the robustness of DNNs. First, we use an end-to-end Com-Rec network to reconstruct examples with feature compression, which amplifies the distinguishing features. Second, we propose an image reconstruction difference measure that combines information-theoretic VIF, structural information UQI, and spectral information RASE to discriminate AEs. Moreover, we introduce the idea of ensemble learning to build a strong random forest binary classifier that further improves detection performance. We validate IRD through extensive experiments on the MNIST and CIFAR-10 datasets. These experiments demonstrate that IRD effectively detects AEs, achieving a high average accuracy of 98.33%. It also performs favorably against methods based on Feature Squeezing, Local Intrinsic Dimensionality, Kernel Density, and Network Invariance Checking, with an average detection rate of 99.54% and an average false positive rate of 1.44%.
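To make the pipeline described in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes the Com-Rec compress-and-reconstruct model is available as a generic `reconstruct` callable, takes VIF, UQI, and RASE from the third-party `sewar` package, and uses scikit-learn's RandomForestClassifier for the binary detector. Names such as `reconstruction_differences` and `train_ird_detector` are illustrative only.

    # Sketch of the IRD detection pipeline, assuming `sewar` and scikit-learn
    # are installed (pip install sewar scikit-learn) and that `reconstruct`
    # stands in for the paper's Com-Rec network.
    import numpy as np
    from sewar.full_ref import vifp, uqi, rase
    from sklearn.ensemble import RandomForestClassifier

    def reconstruction_differences(image, reconstruct):
        """Build a 3-dimensional reconstruction-difference feature for one image."""
        recon = reconstruct(image)
        return np.array([
            vifp(image, recon),   # information-theoretic difference (VIF)
            uqi(image, recon),    # structural difference (UQI)
            rase(image, recon),   # spectral difference (RASE)
        ])

    def train_ird_detector(images, labels, reconstruct):
        """Fit a random forest detector; labels are 1 for AEs, 0 for benign images."""
        feats = np.stack([reconstruction_differences(x, reconstruct) for x in images])
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(feats, labels)
        return clf

At test time, an input would be scored by computing the same three reconstruction-difference features and applying the trained forest's predict method.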
Pages: 7863–7877
Number of pages: 14
Related papers
50 records in total
  • [1] Detecting adversarial examples using image reconstruction differences
    Sun, Jiaze
    Yi, Meng
    SOFT COMPUTING, 2023, 27 (12) : 7863 - 7877
  • [2] Lyapunov stability for detecting adversarial image examples
    Pedraza, Anibal
    Deniz, Oscar
    Bueno, Gloria
    CHAOS SOLITONS & FRACTALS, 2022, 155
  • [3] Detecting Adversarial Examples through Image Transformation
    Tian, Shixin
    Yang, Guolei
    Cai, Ying
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 4139 - 4146
  • [4] Survey on Detecting and Defending Adversarial Examples for Image Data
    Zhang T.
    Yang K.
    Wei J.
    Liu Y.
    Ning Y.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2022, 59 (06): : 1315 - 1328
  • [5] Detecting Adversarial Examples Using Surrogate Models
    Feldsar, Borna
    Mayer, Rudolf
    Rauber, Andreas
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2023, 5 (04): : 1796 - 1825
  • [6] Detecting Adversarial Examples Using Data Manifolds
    Jha, Susmit
    Jang, Uyeong
    Jha, Somesh
    Jalaian, Brian
    2018 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2018), 2018, : 547 - 552
  • [7] Detecting Adversarial Examples via Reconstruction-based Semantic Inconsistency
    Zhang, Chi
    Zhou, Wenbo
    Zhang, Kui
    Zhang, Jie
    Zhang, Weiming
    Yu, Nenghai
    PROCEEDINGS OF THE ACM TURING AWARD CELEBRATION CONFERENCE-CHINA 2024, ACM-TURC 2024, 2024, : 126 - 131
  • [8] Detecting chaos in adversarial examples
    Deniz, Oscar
    Pedraza, Anibal
    Bueno, Gloria
    CHAOS SOLITONS & FRACTALS, 2022, 163
  • [9] ADDITION: Detecting Adversarial Examples With Image-Dependent Noise Reduction
    Wang, Yuchen
    Li, Xiaoguang
    Yang, Li
    Ma, Jianfeng
    Li, Hui
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (03) : 1139 - 1154
  • [10] Feature autoencoder for detecting adversarial examples
    Ye, Hongwei
    Liu, Xiaozhang
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (10) : 7459 - 7477