Output Range Analysis for Feed-Forward Deep Neural Networks via Linear Programming

Cited by: 0
Authors
Xu, Zhiwu [1]
Liu, Yazheng [1]
Qin, Shengchao [1]
Ming, Zhong [1]
Affiliations
[1] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Neurons; Neural networks; Encoding; Linear programming; Deep learning; Upper bound; Taylor series; Deep neural networks (DNNs); ELU; linear programming (LP); output range analysis; sigmoid; verification
DOI
10.1109/TR.2022.3209081
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The success of deep neural networks and their potential use in many safety-critical applications have motivated research on the formal verification of deep neural networks. A fundamental primitive enabling the formal analysis of neural networks is output range analysis. Existing approaches to output range analysis either focus on simple activation functions, such as ReLU, or compute relaxed results for other activation functions, such as the exponential linear unit (ELU). In this article, we propose an approach to compute the output range of feed-forward deep neural networks via linear programming. The key idea is to encode activation functions, such as ELU and sigmoid, as linear constraints in terms of the line between the left and right endpoints of the input range and the tangent lines at some special points in the input range. A strategy to partition the network to obtain a tighter range is also presented. The experimental results show that our approach obtains tighter results than RobustVerifier on ELU networks and sigmoid networks. Moreover, our approach performs better than (the linear encodings implemented in) Crown on ELU networks with alpha = 0.5 or 1.0 and on sigmoid networks, and better than CNN-Cert and DeepCert on ELU networks with alpha = 0.5 or 1.0. For ELU networks with alpha = 2.0, our approach achieves results close to those of Crown, CNN-Cert, and DeepCert. Finally, we also found that the network partition helps to achieve tighter results as well as to improve efficiency for ELU networks.
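To illustrate the encoding idea sketched in the abstract, the following Python snippet is a minimal sketch (not the authors' implementation) of how a sigmoid neuron can be bounded on an input interval [l, u] by the chord through the interval's endpoints together with tangent lines. The abstract's "special points" are not specified here, so, as an assumption, this sketch takes tangents at the two endpoints and falls back to constant bounds when the interval crosses the inflection point at 0.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_linear_bounds(l, u):
        """Return (uppers, lowers): lists of (slope, intercept) pairs such that
        slope*x + intercept over-/under-approximates sigmoid(x) on [l, u].
        Assumes l < u. A sketch of the chord-plus-tangents relaxation idea."""
        sl, su = sigmoid(l), sigmoid(u)
        k_chord = (su - sl) / (u - l)
        chord = (k_chord, sl - k_chord * l)

        def tangent(p):  # tangent line at p; sigmoid'(p) = s(p) * (1 - s(p))
            sp = sigmoid(p)
            k = sp * (1.0 - sp)
            return (k, sp - k * p)

        if u <= 0.0:
            # sigmoid is convex on [l, u]: chord lies above, tangents lie below
            return [chord], [tangent(l), tangent(u)]
        if l >= 0.0:
            # sigmoid is concave on [l, u]: tangents lie above, chord lies below
            return [tangent(l), tangent(u)], [chord]
        # Interval crosses the inflection point 0: fall back to constant bounds,
        # sound because sigmoid is monotonically increasing on [l, u].
        return [(0.0, su)], [(0.0, sl)]

    # Soundness check by dense sampling on a convex sub-interval.
    l, u = -3.0, -0.5
    uppers, lowers = sigmoid_linear_bounds(l, u)
    xs = np.linspace(l, u, 1001)
    ys = sigmoid(xs)
    assert all(np.all(k * xs + b >= ys - 1e-9) for k, b in uppers)
    assert all(np.all(k * xs + b <= ys + 1e-9) for k, b in lowers)
    print("linear bounds sound on [%g, %g]" % (l, u))

In an LP-based analysis of this kind, each (slope, intercept) pair would become a linear constraint y <= k*x + b or y >= k*x + b relating a neuron's pre- and post-activation variables; maximizing and minimizing each output variable subject to all such constraints, together with the affine layer equations, yields the output range.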
Pages: 1191-1205
Number of pages: 15