Decomposing neural networks as mappings of correlation functions

Cited: 10
Authors:
Fischer, Kirsten [1 ,2 ,3 ,4 ]
Rene, Alexandre [1 ,2 ,3 ,5 ,6 ]
Keup, Christian [1 ,2 ,3 ,4 ]
Layer, Moritz [1 ,2 ,3 ,4 ]
Dahmen, David [1 ,2 ,3 ]
Helias, Moritz [1 ,2 ,3 ,6 ]
Affiliations:
[1] Julich Res Ctr, Inst Neurosci & Med INM 6, D-52425 Julich, Germany
[2] Julich Res Ctr, Inst Adv Simulat IAS 6, D-52425 Julich, Germany
[3] Julich Res Ctr, JARA Inst Brain Struct Funct Relationships INM 10, D-52425 Julich, Germany
[4] Rhein Westfal TH Aachen, D-52062 Aachen, Germany
[5] Univ Ottawa, Dept Phys, Ottawa, ON K1N 6N5, Canada
[6] Rhein Westfal TH Aachen, Dept Phys, Fac 1, D-52074 Aachen, Germany
Source:
PHYSICAL REVIEW RESEARCH | 2022, Vol. 4, Issue 04
Keywords:
DEEP;
DOI:
10.1103/PhysRevResearch.4.043143
Chinese Library Classification: O4 [Physics]
Subject classification code: 0702
Abstract:
Understanding the functional principles of information processing in deep neural networks continues to be a challenge, in particular for networks with trained and thus nonrandom weights. To address this issue, we study the mapping between probability distributions implemented by a deep feed-forward network. We characterize this mapping as an iterated transformation of distributions, where the nonlinearity in each layer transfers information between different orders of correlation functions. This allows us to identify essential statistics in the data, as well as different information representations that can be used by neural networks. Applied to an XOR task and to MNIST, we show that correlations up to second order predominantly capture the information processing in the internal layers, while the input layer also extracts higher-order correlations from the data. This analysis provides a quantitative and explainable perspective on classification.
Pages: 23
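
As a minimal illustrative sketch of the picture described in the abstract (a layer acting as a map between correlation functions of the input and output distributions), the following Python snippet is not the authors' code; the layer width, weights, and Gaussian input statistics are assumptions made up for illustration. It samples a Gaussian input distribution, pushes the samples through a single tanh layer, and empirically estimates the first- and second-order correlation functions (mean and covariance) of the output.

```python
# Minimal sketch (not the authors' implementation): estimate how a single
# tanh layer maps first- and second-order correlation functions (mean and
# covariance) of its input distribution to those of its output distribution.
# Layer sizes, weights, and input statistics below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_samples = 4, 3, 100_000

# Assumed Gaussian input distribution with given mean and covariance.
mu_in = rng.normal(size=n_in)
A = rng.normal(size=(n_in, n_in))
cov_in = A @ A.T / n_in  # positive semidefinite covariance

# Assumed fixed affine layer followed by a tanh nonlinearity.
W = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)
b = rng.normal(size=n_out)

x = rng.multivariate_normal(mu_in, cov_in, size=n_samples)  # input samples
y = np.tanh(x @ W.T + b)                                    # layer outputs

# Output correlation functions up to second order.
mu_out = y.mean(axis=0)            # first order (mean)
cov_out = np.cov(y, rowvar=False)  # second order (covariance)

print("output mean:\n", mu_out)
print("output covariance:\n", cov_out)
```

Because the tanh nonlinearity is not linear, the output covariance is not simply W @ cov_in @ W.T; the discrepancy between the two is the kind of transfer of information between correlation orders that the paper quantifies.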