Metrics and methods for robustness evaluation of neural networks with generative models

Cited by: 0
Authors
Igor Buzhinsky
Arseny Nerinovsky
Stavros Tripakis
Affiliations
[1] ITMO University,Computer Technologies Laboratory
[2] Aalto University,Department of Electrical Engineering and Automation
[3] Northeastern University
Source
Machine Learning | 2023, Vol. 112
Keywords
Reliable machine learning; Adversarial examples; Natural adversarial examples; Generative models;
DOI
Not available
Abstract
Recent studies have shown that modern deep neural network classifiers are easy to fool if an adversary is able to slightly modify their inputs. Many papers have proposed adversarial attacks, defenses, and methods to measure robustness to such adversarial perturbations. However, the most commonly considered adversarial examples are based on perturbations in the input space of the neural network that are unlikely to arise naturally. Recently, especially in computer vision, researchers have discovered “natural” perturbations, such as rotations, changes of brightness, or higher-level changes, but these perturbations have not yet been systematically used to measure the performance of classifiers. In this paper, we propose several metrics to measure the robustness of classifiers to natural adversarial examples, and methods to evaluate them. These metrics, called latent space performance metrics, are based on the ability of generative models to capture probability distributions. On four image classification case studies, we evaluate the proposed metrics for several classifiers, including ones trained in conventional and robust ways. We find that the latent counterparts of adversarial robustness are associated with the accuracy of the classifier rather than its conventional adversarial robustness, but the latter is still reflected in the properties of the latent perturbations found. In addition, our novel method of finding latent adversarial perturbations demonstrates that these perturbations are often perceptually small.
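The core idea of a latent adversarial perturbation, as described above, is to perturb the latent vector z of a generative model rather than the input image itself, so that the decoded input stays on the natural data manifold while the classifier's prediction flips. The sketch below illustrates this idea under heavy simplifying assumptions: the "decoder" and classifier are toy linear maps (stand-ins for a trained generative model and neural classifier, which the paper uses), and the search is a simple line search with bisection along the latent-space gradient direction, not the paper's actual optimization procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-ins: a linear "decoder" G mapping a latent vector z
# (dim 4) to an input x (dim 8), and a linear binary classifier f. In the
# paper's setting, G would be a deep generative model and f a trained network.
W_g = rng.normal(size=(8, 4))  # decoder weights
w_f = rng.normal(size=8)       # classifier weights

def decode(z):
    return W_g @ z

def predict(x):
    return int(w_f @ x > 0.0)

def latent_adversarial(z0, tol=1e-6):
    """Find a small latent perturbation d with
    predict(decode(z0 + d)) != predict(decode(z0)).

    For this linear pipeline the classifier score is s(z) = v @ z with
    v = W_g.T @ w_f, so we move along -sign(s) * v (the direction that
    shrinks the score), doubling the step until the label flips, then
    bisecting to minimize the perturbation norm.
    """
    v = W_g.T @ w_f
    y0 = predict(decode(z0))
    direction = -np.sign(w_f @ decode(z0)) * v / np.linalg.norm(v)
    lo, hi = 0.0, 1e-3
    while predict(decode(z0 + hi * direction)) == y0:
        hi *= 2.0  # grow the step until the decision boundary is crossed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if predict(decode(z0 + mid * direction)) == y0:
            lo = mid
        else:
            hi = mid
    return hi * direction

z0 = rng.normal(size=4)           # a natural example's latent code
d = latent_adversarial(z0)        # minimal flipping perturbation (up to tol)
```

Because the perturbation is applied in latent space, decode(z0 + d) remains a plausible sample from the generative model, which is what makes the resulting adversarial example "natural" in the sense used above.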
Pages: 3977–4012 (35 pages)