Mappings, dimensionality and reversing out of deep neural networks

Cited: 2
Authors
Cui, Zhaofang
Grindrod, Peter
Keywords
Degrees of freedom (mechanics); Embeddings; Multilayer neural networks
DOI
10.1093/imamat/hxad019
CLC classification number
O29 [Applied Mathematics]
Subject classification code
070104
Abstract
We consider the large cloud of vectors formed at each layer of a standard neural network, corresponding to a large number of separate inputs that were presented independently to the classifier. Although the embedding dimension (the total possible degrees of freedom) decreases as we pass through successive layers, from input to output, the actual dimensionality of the point clouds that the layers contain does not necessarily decrease. We argue that this phenomenon may result in a vulnerability to (universal) adversarial attacks, which are small, specific perturbations. The analysis requires us to estimate the intrinsic dimension of point clouds (with values between 20 and 200) within embedding spaces of dimension 1000 up to 800,000; this needs some care. If the cloud dimension actually increases from one layer to the next, it implies some 'volume-filling' over-folding: there then exist small directional perturbations in the later space that are equivalent to shifting large distances within the earlier space, inviting the possibility of universal and imperceptible attacks.
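Intrinsic dimension in this regime (point clouds of dimension 20–200 sitting in embedding spaces of dimension 1000 and above) cannot be read off from the ambient dimension; it must be estimated from neighbour statistics. A minimal sketch of one standard estimator, the two-nearest-neighbour (TwoNN) method of Facco et al., is given below. This is an illustration of the general technique only, not the authors' own pipeline; the function name and the synthetic test cloud are hypothetical.

```python
import numpy as np

def twonn_dimension(X):
    """TwoNN intrinsic-dimension estimate: d_hat = N / sum(log(r2 / r1)),
    where r1, r2 are each point's first- and second-nearest-neighbour distances."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Squared pairwise distances via the Gram matrix
    # (avoids building an n x n x D difference tensor in high ambient dimension).
    sq = np.einsum("ij,ij->i", X, X)
    d2 = np.clip(sq[:, None] + sq[None, :] - 2.0 * (X @ X.T), 0.0, None)
    np.fill_diagonal(d2, np.inf)                   # exclude self-distances
    two_smallest = np.partition(d2, 1, axis=1)[:, :2]
    r1 = np.sqrt(two_smallest[:, 0])
    r2 = np.sqrt(two_smallest[:, 1])
    mu = r2 / r1                                   # neighbour-distance ratios
    return n / np.sum(np.log(mu))

# Hypothetical example: a 2-D Gaussian cloud mapped linearly into a
# 1000-dimensional ambient space, mimicking a low-dimensional layer cloud.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((400, 2)) @ rng.standard_normal((2, 1000))
d_hat = twonn_dimension(cloud)
print(d_hat)  # close to the intrinsic value 2, far below the embedding dimension
```

The estimator depends only on ratios of nearest-neighbour distances, which is what makes it usable when the ambient dimension is in the hundreds of thousands: no volume or density estimate in the full embedding space is ever needed.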
Pages: 2-11
Page count: 10
Related papers
50 in total
  • [31] Interaction of Generalization and Out-of-Distribution Detection Capabilities in Deep Neural Networks
    Aboitiz, Francisco Javier Klaiber
    Legenstein, Robert
    Oezdenizci, Ozan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PART X, 2023, 14263 : 248 - 259
  • [32] OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks
    Li, Jiashi
    Qi, Qi
    Wang, Jingyu
    Ge, Ce
    Li, Yujian
    Yue, Zhangzhang
    Sun, Haifeng
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 7039 - 7048
  • [33] ON THE DESIGN OF FEEDFORWARD NEURAL NETWORKS FOR BINARY MAPPINGS
    TAN, SH
    VANDEWALLE, J
    NEUROCOMPUTING, 1994, 6 (5-6) : 565 - 582
  • [34] Decomposing neural networks as mappings of correlation functions
    Fischer, Kirsten
    Rene, Alexandre
    Keup, Christian
    Layer, Moritz
    Dahmen, David
    Helias, Moritz
    PHYSICAL REVIEW RESEARCH, 2022, 4 (04):
  • [35] Hamiltonian Neural Networks based classifiers and mappings
    Sienko, W.
    Zamojski, D.
    2006 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORK PROCEEDINGS, VOLS 1-10, 2006, : 794 - +
  • [36] DeepView: Visualizing Classification Boundaries of Deep Neural Networks as Scatter Plots Using Discriminative Dimensionality Reduction
    Schulz, Alexander
    Hinder, Fabian
    Hammer, Barbara
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2305 - 2311
  • [37] Remaining Useful Life Prognosis for Turbofan Engine Using Explainable Deep Neural Networks with Dimensionality Reduction
    Hong, Chang Woo
    Lee, Changmin
    Lee, Kwangsuk
    Ko, Min-Seung
    Kim, Dae Eun
    Hur, Kyeon
    SENSORS, 2020, 20 (22) : 1 - 19
  • [38] A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
    Hutzenthaler, Martin
    Jentzen, Arnulf
    Kruse, Thomas
    Nguyen, Tuan Anh
    PARTIAL DIFFERENTIAL EQUATIONS AND APPLICATIONS, 2020, 1 (02):
  • [39] MAPPINGS THAT DO NOT REDUCE DIMENSIONALITY
    Fedorchuk, V. V.
    DOKLADY AKADEMII NAUK SSSR, 1969, 185 (01): 54 -
  • [40] Self-calibrating Neural Networks for Dimensionality Reduction
    Chen, Yuansi
    Pehlevan, Cengiz
    Chklovskii, Dmitri B.
    2016 50TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS AND COMPUTERS, 2016, : 1488 - 1495