A symbolic execution-based method to perform untargeted attack on feed-forward neural networks

Cited by: 1
|
Authors
Nguyen, Duc-Anh [1 ]
Do Minh, Kha [1 ]
Nguyen, Minh Le [2 ]
Hung, Pham Ngoc [1 ]
Affiliations
[1] Vietnam Natl Univ, VNU Univ Engn & Technol VNU UET, 144 Xuanthuy Str, Hanoi 100000, Vietnam
[2] Japan Adv Inst Sci & Technol JAIST, Sch Informat Sci, ASAHIDAI 1-1, Nomi 9231211, Japan
Keywords
Symbolic execution; SMT solver; Feed-forward neural network; Robustness; Adversarial example generation
DOI
10.1007/s10515-022-00345-x
CLC classification number
TP31 [Computer Software]
Subject classification number
081202; 0835
Abstract
DeepCheck is a symbolic execution-based method for attacking feed-forward neural networks. In the untargeted attack, however, DeepCheck suffers from a low success rate because it must preserve neuron activation patterns and because SMT solvers struggle to solve the resulting constraints. This paper therefore proposes a method to improve the success rate of DeepCheck. The proposed method differs from DeepCheck in two main ways: (i) it does not force neuron activation patterns to be preserved, and (ii) it uses a heuristic solver rather than SMT solvers. Experimental results on MNIST, Fashion-MNIST, and A-Z handwritten alphabets show three promising outcomes. In the 1-pixel attack, DeepCheck obtains an average success rate of 0.7%, while the proposed method achieves an average of 54.3%. In the n-pixel attack, DeepCheck obtains an average success rate of at most 16.9% with the Z3 solver and at most 26.8% with the SMTInterpol solver, while the proposed method achieves an average of at most 98.7%. In terms of solving cost, the proposed heuristic solver takes around 0.4 s per attack on average, whereas DeepCheck is usually significantly slower. These results demonstrate the effectiveness of the proposed method in addressing the limitations of DeepCheck.
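To make the idea concrete, the following is a minimal sketch of an untargeted n-pixel attack driven by a simple heuristic search rather than an SMT solver, in the spirit of the abstract above. The small MLP, the gradient-based choice of which pixels to perturb, and the grid of candidate pixel intensities are all assumptions made for illustration only; this is not the authors' algorithm.

# A minimal, illustrative sketch of an untargeted n-pixel attack on a feed-forward
# network. NOT the paper's algorithm: the MLP below, the gradient-based pixel
# selection, and the grid search over intensities are assumptions used only to
# show the general idea of a heuristic (non-SMT) search for a label change.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small stand-in model for the network under attack (assumed 28x28 grayscale input).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()


def untargeted_n_pixel_attack(x, n_pixels=3, candidates=torch.linspace(0.0, 1.0, 11)):
    """Greedy heuristic: perturb the n pixels with the largest input gradient,
    trying a grid of candidate intensities until the predicted label changes."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    orig_label = logits.argmax(dim=1).item()
    # The gradient of the original-class score w.r.t. the input ranks pixel importance.
    logits[0, orig_label].backward()
    pixel_idx = x.grad.abs().flatten().topk(n_pixels).indices

    adv = x.detach().clone()
    with torch.no_grad():
        for idx in pixel_idx:
            best_val = adv.view(-1)[idx].item()
            best_score = model(adv)[0, orig_label].item()
            for value in candidates:
                trial = adv.clone()
                trial.view(-1)[idx] = value
                out = model(trial)
                if out.argmax(dim=1).item() != orig_label:
                    return trial, True  # label changed: untargeted attack succeeded
                score = out[0, orig_label].item()
                if score < best_score:  # greedily keep the most damaging intensity
                    best_val, best_score = value.item(), score
            adv.view(-1)[idx] = best_val
    return adv, False


x0 = torch.rand(1, 1, 28, 28)  # stand-in for an MNIST-like image
adversarial, success = untargeted_n_pixel_attack(x0, n_pixels=3)
print("attack succeeded:", success)

The sketch reports failure if no label change is found within the candidate grid; a larger grid or a different pixel-selection heuristic could be substituted without changing the overall structure.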
Pages: 29
Related papers
50 items in total
  • [1] A symbolic execution-based method to perform untargeted attack on feed-forward neural networks
    Duc-Anh Nguyen
    Kha Do Minh
    Minh Le Nguyen
    Pham Ngoc Hung
    Automated Software Engineering, 2022, 29
  • [2] Feed-forward neural networks
    Bebis, George
    Georgiopoulos, Michael
    IEEE Potentials, 1994, 13 (04): : 27 - 31
  • [3] An improved training method for feed-forward neural networks
    Lendl, M
    Unbehauen, R
    CLASSIFICATION IN THE INFORMATION AGE, 1999, : 320 - 327
  • [4] FFNSL: Feed-Forward Neural-Symbolic Learner
    Cunnington, Daniel
    Law, Mark
    Lobo, Jorge
    Russo, Alessandra
    MACHINE LEARNING, 2023, 112 (2) : 515 - 569
  • [6] Patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks
    Aguiar, Manuela A. D.
    Dias, Ana Paula S.
    Ferreira, Flora
    CHAOS, 2017, 27 (01)
  • [7] Feed-forward Neural Networks with Trainable Delay
    Ji, Xunbi A.
    Molnar, Tamas G.
    Avedisov, Sergei S.
    Orosz, Gabor
    LEARNING FOR DYNAMICS AND CONTROL, VOL 120, 2020, 120 : 127 - 136
  • [8] On lateral connections in feed-forward neural networks
    Kothari, R
    Agyepong, K
    ICNN - 1996 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS, VOLS. 1-4, 1996, : 13 - 18
  • [9] Optimizing dense feed-forward neural networks
    Balderas, Luis
    Lastra, Miguel
    Benitez, Jose M.
    NEURAL NETWORKS, 2024, 171 : 229 - 241
  • [10] Maximizing the margin with Feed-Forward Neural Networks
    Romero, E
    Alquézar, R
    PROCEEDING OF THE 2002 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-3, 2002, : 743 - 748