A Sound Abstraction Method Towards Efficient Neural Networks Verification

Cited: 0
Authors
Boudardara, Fateh [1 ]
Boussif, Abderraouf [1 ]
Ghazel, Mohamed [1 ,2 ]
Affiliations
[1] Technol Res Inst Railenium, 180 rue Joseph Louis Lagrange, F-59308 Valenciennes, France
[2] Univ Gustave Eiffel, COSYS ESTAS, 20 rue Elisee Reclus, F-59666 Villeneuve Dascq, France
Keywords
Neural network abstraction; Model reduction; Neural network verification; Output range computation
DOI
10.1007/978-3-031-49737-7_6
Chinese Library Classification (CLC): TP301 [Theory, Methods]
Discipline code: 081202
Abstract
With the increasing application of neural networks (NN) in safety-critical systems, the (formal) verification of NN is becoming essential. Although several NN verification techniques have been developed in recent years, these techniques are often limited to small networks and do not scale well to larger ones. The primary reason for this limitation is the complexity and non-linearity of neural network models. Abstraction and model reduction approaches, which aim to reduce the size of neural networks while over-approximating their outcomes, are seen as a promising research direction to help existing verification methods handle larger models. In this paper, we introduce a model reduction method for neural networks with non-negative activation functions (e.g., ReLU and Sigmoid). The method relies on merging neurons while ensuring that the resulting model (i.e., the abstract model) over-approximates the original one. Concretely, it consists in merging a set of neurons that have positive outgoing weights and substituting them with a single abstract neuron, while ensuring that if a given property holds on the abstract network, it necessarily holds on the original one. To assess the efficiency of the approach, we perform an experimental comparison with two existing model reduction methods on the ACAS Xu benchmark. The obtained results show that our approach outperforms both methods in terms of precision and execution time.
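The merging step described above can be illustrated with a small sketch. The construction below is an assumption for illustration only, not necessarily the paper's exact rule: the abstract neuron takes the element-wise maximum of the merged neurons' incoming weights and biases, and the sum of their outgoing weights. With non-negative inputs (e.g., outputs of a preceding ReLU layer), a monotone non-negative activation, and non-negative outgoing weights on the merged neurons, the abstract network's output dominates the original's, which is the over-approximation property the abstract needs. The function name `merge_neurons` and the whole setup are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def merge_neurons(W_in, b, W_out, S):
    """Merge the neurons indexed by S into one abstract neuron.

    Illustrative over-approximating construction (an assumption, not the
    paper's exact rule): incoming weights/bias of the abstract neuron are
    the element-wise max over the merged neurons; its outgoing weights are
    the sum of theirs.  Requires W_out[S, :] >= 0 and non-negative inputs.
    """
    S = sorted(S)
    keep = [j for j in range(W_in.shape[1]) if j not in S]
    w_abs_in = W_in[:, S].max(axis=1, keepdims=True)   # max of incoming weights
    b_abs = b[S].max()                                  # max of biases
    w_abs_out = W_out[S, :].sum(axis=0, keepdims=True)  # sum of outgoing weights
    W_in_new = np.hstack([W_in[:, keep], w_abs_in])
    b_new = np.append(b[keep], b_abs)
    W_out_new = np.vstack([W_out[keep, :], w_abs_out])
    return W_in_new, b_new, W_out_new

# Toy check that the abstract network over-approximates the original.
rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 5))
b = rng.normal(size=5)
W_out = rng.normal(size=(5, 3))
S = [1, 3]
W_out[S, :] = np.abs(W_out[S, :])   # merged neurons must have non-negative outgoing weights

Wa, ba, Wo = merge_neurons(W_in, b, W_out, S)

x = np.abs(rng.normal(size=(10, 4)))        # non-negative inputs (e.g., post-ReLU)
y_orig = relu(x @ W_in + b) @ W_out
y_abs = relu(x @ Wa + ba) @ Wo
assert np.all(y_abs >= y_orig - 1e-9)       # component-wise over-approximation
```

Because the abstract output is an upper bound, any safety property of the form "output stays below a threshold" that holds on the reduced network carries over to the original one, which is the soundness argument the abstract sketches.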
Pages: 76-89
Page count: 14
Related papers
50 records total
  • [31] Verification and behavior abstraction towards a tractable verification technique for large distributed systems
    Nitsche, U
    JOURNAL OF SYSTEMS AND SOFTWARE, 1996, 33 (03) : 273 - 285
  • [32] An Efficient Learning Method for RBF Neural Networks
    Pazouki, Maryam
    Wu, Zijun
    Yang, Zhixing
    Moeller, Dietmar P. F.
    2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2015,
  • [33] Abstraction in FPGA implementation of neural networks
    Ogrenci, Arif Selcuk
    PROCEEDINGS OF THE 9TH WSEAS INTERNATIONAL CONFERENCE ON NEURAL NETWORKS (NN' 08): ADVANCED TOPICS ON NEURAL NETWORKS, 2008, : 221 - 224
  • [35] Neural networks in higher levels of abstraction
    Hilberg, W
    BIOLOGICAL CYBERNETICS, 1997, 76 (01) : 23 - 40
  • [36] Efficient Verification of Neural Networks against LVM-based Specifications
    Hanspal, Harleen
    Lomuscio, Alessio
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 3894 - 3903
  • [37] Attack-Guided Efficient Robustness Verification of ReLU Neural Networks
    Zhu, Yiwei
    Wang, Feng
    Wan, Wenjie
    Zhang, Min
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [38] Efficient Complete Verification of Neural Networks via Layerwised Splitting and Refinement
    Yin, Banghu
    Chen, Liqian
    Liu, Jiangchao
    Wang, Ji
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (11) : 3898 - 3909
  • [39] Efficient Robustness Verification of the Deep Neural Networks for Smart IoT Devices
    Zhang, Zhaodi
    Liu, Jing
    Zhang, Min
    Sun, Haiying
    COMPUTER JOURNAL, 2022, 65 (11): : 2894 - 2908
  • [40] Rule extraction as a formal method for the verification and validation of neural networks
    Taylor, BJ
    Darrah, MA
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), VOLS 1-5, 2005, : 2915 - 2920