Assessing Threat of Adversarial Examples on Deep Neural Networks

Cited by: 17
Authors
Graese, Abigail [1]
Rozsa, Andras [1]
Boult, Terrance E. [1]
Affiliations
[1] Univ Colorado, Vis & Secur Technol VAST Lab, Colorado Springs, CO 80907 USA
Funding
U.S. National Science Foundation
Keywords
DOI
10.1109/ICMLA.2016.44
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in handwritten digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real-world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus, just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text-driven classification, adversarial examples are an academic curiosity, not a security threat.
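The mitigations described in the abstract are plain image-level operations. The sketch below is an illustration only, not the authors' exact pipeline: the 0.5 binarization threshold, the 24-pixel crop size, the shift offsets, and the `predict_fn` classifier interface are all assumptions made for this example.

```python
import numpy as np


def binarize(image, threshold=0.5):
    """Binarize a grayscale image in [0, 1]; pixels above the threshold become 1."""
    return (image > threshold).astype(np.float32)


def average_over_crops(image, predict_fn, crop_size=24, offsets=(0, 2, 4)):
    """Average class probabilities over several slightly shifted crops.

    predict_fn: callable mapping a (crop_size, crop_size) array to a
    probability vector over classes (hypothetical classifier interface).
    """
    h, w = image.shape
    probs = []
    for dy in offsets:
        for dx in offsets:
            if dy + crop_size <= h and dx + crop_size <= w:
                crop = image[dy:dy + crop_size, dx:dx + crop_size]
                probs.append(predict_fn(crop))
    return np.mean(probs, axis=0)


if __name__ == "__main__":
    # Toy example: a random 28x28 "digit" and a dummy uniform classifier
    # standing in for a real deep neural network.
    rng = np.random.default_rng(0)
    digit = rng.random((28, 28)).astype(np.float32)
    dummy_predict = lambda crop: np.full(10, 0.1)
    cleaned = binarize(digit)
    avg_probs = average_over_crops(cleaned, dummy_predict)
    print(avg_probs.shape)  # (10,)
```

The intuition, per the abstract, is that a prediction averaged over several shifted crops, or a prediction made on a binarized input, is far less sensitive to the pixel-level adversarial perturbation than a single prediction on the unmodified adversarial image.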
Pages: 69-74
Page count: 6
Related Papers
50 items in total
  • [41] Paluzo-Hidalgo, Eduardo; Gonzalez-Diaz, Rocio; Gutierrez-Naranjo, Miguel A.; Heras, Jonathan. Simplicial-Map Neural Networks Robust to Adversarial Examples. MATHEMATICS, 2021, 9(02): 1-16.
  • [42] Li, Yandong; Li, Lijun; Wang, Liqiang; Zhang, Tong; Gong, Boqing. NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019.
  • [43] Xiao, Chaowei; Li, Bo; Zhu, Jun-Yan; He, Warren; Liu, Mingyan; Song, Dawn. Generating Adversarial Examples with Adversarial Networks. PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018: 3905-3911.
  • [44] Eleftheriadis, Charis; Symeonidis, Andreas; Katsaros, Panagiotis. Adversarial robustness improvement for deep neural networks. MACHINE VISION AND APPLICATIONS, 2024, 35(03).
  • [45] Carrara, Fabio; Falchi, Fabrizio; Caldelli, Roberto; Amato, Giuseppe; Becarelli, Rudy. Adversarial image detection in deep neural networks. MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78(03): 2815-2835.
  • [47] Wiedeman, Christopher; Wang, Ge. Disrupting adversarial transferability in deep neural networks. PATTERNS, 2022, 3(05).
  • [49] Zhang, Xingwei; Zheng, Xiaolong; Mao, Wenji. Adversarial Perturbation Defense on Deep Neural Networks. ACM COMPUTING SURVEYS, 2021, 54(08).
  • [50] Wang, Gengxing; Chen, Xinyuan; Xu, Chang. Adversarial Watermarking to Attack Deep Neural Networks. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019: 1962-1966.