Verification of Neural Networks' Global Robustness

Cited: 0
|
Authors
Kabaha, Anan [1 ]
Cohen, Dana Drachsler [1 ]
Affiliations
[1] Technion, Haifa, Israel
Source
Funding
Israel Science Foundation;
Keywords
Neural Network Verification; Global Robustness; Constrained Optimization;
DOI
10.1145/3649847
CLC Number
TP31 [Computer Software];
Discipline Codes
081202 ; 0835 ;
Abstract
Neural networks are successful in various applications but are also susceptible to adversarial attacks. To show the safety of network classifiers, many verifiers have been introduced to reason about the local robustness of a given input to a given perturbation. While successful, local robustness cannot be generalized to unseen inputs. Several works analyze global robustness properties, but none provides a precise guarantee about the cases in which a network classifier does not change its classification. In this work, we propose a new global robustness property for classifiers, aiming at finding the minimal globally robust bound, which naturally extends the popular local robustness property for classifiers. We introduce VHAGaR, an anytime verifier for computing this bound. VHAGaR relies on three main ideas: encoding the problem as a mixed-integer program, pruning the search space by identifying dependencies stemming from the perturbation or the network's computation, and generalizing adversarial attacks to unknown inputs. We evaluate VHAGaR on several datasets and classifiers and show that, given a three-hour timeout, the average gap between the lower and upper bounds on the minimal globally robust bound computed by VHAGaR is 1.9, while the gap of an existing global robustness verifier is 154.7. Moreover, VHAGaR is 130.6x faster than that verifier. Our results further indicate that leveraging dependencies and adversarial attacks makes VHAGaR 78.6x faster.
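The global robustness property described in the abstract can be illustrated with a brute-force toy sketch (this is not VHAGaR's MIP encoding; the two-feature linear classifier, the confidence threshold kappa, and all constants below are hypothetical): scan confidently-classified inputs and find the smallest perturbation that flips any of them, which upper-bounds the minimal globally robust bound.

```python
# Toy illustration of the global robustness property (NOT VHAGaR's MIP
# encoding): brute-force estimate of the smallest L_inf perturbation that
# flips the class of ANY sufficiently confident input. The two-feature
# linear "network" and all constants here are hypothetical.

def scores(x1, x2):
    # Hypothetical linear two-class network: one score per class.
    s0 = 2.0 * x1 - 1.0 * x2
    s1 = -1.0 * x1 + 2.0 * x2
    return s0, s1

def min_flip_perturbation(kappa=0.25, grid_steps=50, eps_steps=64, eps_max=1.0):
    """Scan grid inputs in [0,1]^2 classified as class 0 with margin >= kappa
    (restricting to confident inputs mirrors the property; without it, points
    on the decision boundary drive the bound to 0). For each input, find the
    smallest grid perturbation that changes the class; the minimum over all
    inputs upper-bounds the minimal globally robust bound for class 0."""
    best = eps_max
    for i in range(grid_steps + 1):
        for j in range(grid_steps + 1):
            x1, x2 = i / grid_steps, j / grid_steps
            s0, s1 = scores(x1, x2)
            if s0 - s1 < kappa:
                continue  # not classified as 0 with enough confidence
            for k in range(1, eps_steps + 1):
                eps = eps_max * k / eps_steps
                # For a linear network it suffices to check the 4 corners of
                # the L_inf ball (we allow leaving [0,1]^2 for simplicity).
                corners = [scores(x1 + dx, x2 + dy)
                           for dx in (-eps, eps) for dy in (-eps, eps)]
                if any(p0 < p1 for p0, p1 in corners):
                    best = min(best, eps)
                    break
    return best

print(min_flip_perturbation())  # 0.0625 for this toy classifier
```

VHAGaR replaces this exhaustive scan with an exact mixed-integer program over all (continuous) inputs, plus dependency-based pruning and generalized adversarial attacks to tighten the bound anytime.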
Pages: 30
Related Papers (50 total)
  • [31] On the robustness of global exponential stability for hybrid neural networks with noise and delay perturbations
    Jiang, Feng
    Yang, Hua
    Shen, Yi
    NEURAL COMPUTING & APPLICATIONS, 2014, 24 (7-8): 1497-1504
  • [32] Robustness Verification of Deep Neural Networks on High-speed Rail Operating Environment Recognition
    Gao Z.
    Su Y.
    Hou X.
    Fang P.
    Zhang M.
    Tongji Daxue Xuebao/Journal of Tongji University, 2022, 50 (10): 1405-1413
  • [33] Towards robust neural networks via a global and monotonically decreasing robustness training strategy
    Liang, Zhen
    Wu, Taoran
    Liu, Wanwei
    Xue, Bai
    Yang, Wenjing
    Wang, Ji
    Pang, Zhengbin
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2023, 24 (10): 1375-1389
  • [34] Robustness analysis for connection weight matrix of global exponential stability recurrent neural networks
    Zhu, Song
    Shen, Yi
    NEUROCOMPUTING, 2013, 101: 370-374
  • [35] ε-Weakened Robustness of Deep Neural Networks
    Huang, Pei
    Yang, Yuting
    Liu, Minghao
    Jia, Fuqi
    Ma, Feifei
    Zhang, Jian
    PROCEEDINGS OF THE 31ST ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2022, 2022: 126-138
  • [36] The geometry of robustness in spiking neural networks
    Calaim, Nuno
    Dehmelt, Florian A.
    Goncalves, Pedro J.
    Machens, Christian K.
    ELIFE, 2022, 11
  • [37] Robustness analysis for compact neural networks
    Chen G.
    Peng P.
    Tian Y.
    Zhongguo Kexue Jishu Kexue/Scientia Sinica Technologica, 2022, 52 (05): 689-703
  • [38] Wasserstein distributional robustness of neural networks
    Bai, Xingjian
    He, Guangyi
    Jiang, Yifan
    Obloj, Jan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [39] Probabilistic Robustness Quantification of Neural Networks
    Kishan, Gopi
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35: 15966-15967
  • [40] Noise robustness in multilayer neural networks
    Copelli, M.
    Eichhorn, R.
    Kinouchi, O.
    Biehl, M.
    Europhysics Letters, 37 (06)