Adversarial Examples: Attacks and Defenses for Deep Learning

Cited by: 1076
Authors
Yuan, Xiaoyuan [1]
He, Pan [1 ]
Zhu, Qile [1 ]
Li, Xiaolin [1 ]
Affiliation
[1] NSF Center for Big Learning, University of Florida, Gainesville, FL 32611 USA
Funding
US National Science Foundation
Keywords
Adversarial examples; deep learning (DL); deep neural network (DNN); security; poisoning attacks
DOI
10.1109/TNNLS.2018.2886017
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
With rapid progress and significant successes in a wide spectrum of applications, deep learning is increasingly being applied in safety-critical environments. However, deep neural networks (DNNs) have recently been found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool DNNs at the testing/deployment stage. This vulnerability is one of the major risks of applying DNNs in safety-critical environments, so attacks using adversarial examples, and defenses against them, have drawn great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under this taxonomy, applications of adversarial examples are investigated. We further elaborate on countermeasures against adversarial examples. In addition, three major challenges posed by adversarial examples and their potential solutions are discussed.
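As a concrete illustration of how such imperceptible perturbations are generated, the sketch below implements the fast gradient sign method (FGSM), one of the best-known generation methods covered by surveys of this kind. It is a minimal PyTorch sketch, not code from the paper: the classifier `model`, inputs `x` in [0, 1], integer labels `y`, and the budget `epsilon` are all assumptions for the example.

```python
# Minimal FGSM sketch (illustrative; model, data, and epsilon are assumed).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Return x perturbed by epsilon * sign(grad_x loss), a one-step attack."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)     # loss the attacker wants to increase
    loss.backward()                         # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()     # small step in the worst-case direction
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in the valid range
```

A small `epsilon` keeps the perturbation visually imperceptible while often being enough to flip the model's prediction, which is exactly the vulnerability the abstract describes.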
Pages: 2805-2824
Page count: 20