Autoencoder networks extract latent variables and encode these variables in their connectomes

Cited by: 0
Authors
Farrell, Matthew [1,2]
Recanatesi, Stefano [2]
Reid, R. Clay [3]
Mihalas, Stefan [3]
Shea-Brown, Eric [1,2,3]
Affiliations
[1] Applied Mathematics Department, University of Washington, Seattle, WA, United States
[2] Computational Neuroscience Center, University of Washington, Seattle, WA, United States
[3] Allen Institute for Brain Science, Seattle, WA, United States
Keywords
Learning systems
DOI
Not available
Abstract
Advances in electron microscopy and data processing techniques are leading to increasingly large and complete microscale connectomes. At the same time, advances in artificial neural networks have produced model systems that perform comparably rich computations with perfectly specified connectivity. This raises an exciting scientific opportunity for the study of both biological and artificial neural networks: to infer the underlying circuit function from the structure of its connectivity. A potential roadblock, however, is that, even with well-constrained neural dynamics, there are in principle many different connectomes that could support a given computation. Here, we define a tractable setting in which the problem of inferring circuit function from circuit connectivity can be analyzed in detail: the function of input compression and reconstruction, in an autoencoder network with a single hidden layer. In this setting there is, in general, substantial ambiguity in the weights that produce the same circuit function, because largely arbitrary changes to the input weights can be undone by applying the inverse modifications to the output weights. However, we use mathematical arguments and simulations to show that adding simple, biologically motivated regularization of connectivity resolves this ambiguity in an interesting way: weights are constrained such that the latent variable structure underlying the inputs can be extracted from the weights by using nonlinear dimensionality reduction methods. © 2021 Elsevier Ltd
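The abstract makes two concrete, testable claims: that invertible changes of basis between encoder and decoder leave the circuit function unchanged, and that weight regularization ties the learned weights to the latent structure of the inputs. The sketch below is a minimal numerical illustration of both, not the authors' code: the ring-shaped stimulus ensemble, the network sizes, the tanh nonlinearity, the L2 weight penalty, and the choice of Isomap as the nonlinear dimensionality reduction method are all illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# (1) Weight ambiguity: for a linear autoencoder x -> W_out @ W_in @ x,
# any invertible matrix A applied to the encoder weights can be undone in
# the decoder, so many different weight matrices implement the same map.
n_in, n_hid = 10, 4
W_in = rng.normal(size=(n_hid, n_in))
W_out = rng.normal(size=(n_in, n_hid))
A = rng.normal(size=(n_hid, n_hid))              # generically invertible
x = rng.normal(size=n_in)
y1 = W_out @ (W_in @ x)
y2 = (W_out @ np.linalg.inv(A)) @ ((A @ W_in) @ x)
assert np.allclose(y1, y2)                       # same function, different weights

# (2) Regularized training: inputs are Gaussian bumps on a ring, so the
# latent variable underlying the inputs is the bump's angle theta.
theta = rng.uniform(0, 2 * np.pi, size=2000)
grid = np.linspace(0, 2 * np.pi, 50, endpoint=False)
d = np.angle(np.exp(1j * (grid[None, :] - theta[:, None])))  # circular distance
X = np.exp(-d**2 / 0.1)                          # (2000 samples, 50 inputs)

# One-hidden-layer autoencoder with tanh hidden units and L2 weight decay,
# trained by plain gradient descent on the mean squared reconstruction error.
n_hid, lam, lr = 20, 1e-3, 0.05
W1 = rng.normal(scale=0.1, size=(n_hid, X.shape[1]))   # encoder weights
W2 = rng.normal(scale=0.1, size=(X.shape[1], n_hid))   # decoder weights
for _ in range(2000):
    H = np.tanh(X @ W1.T)                        # hidden-layer activity
    Xhat = H @ W2.T                              # reconstruction
    E = (Xhat - X) / len(X)                      # error, scaled for the mean
    gW2 = E.T @ H + lam * W2
    gW1 = ((E @ W2) * (1 - H**2)).T @ X + lam * W1
    W1 -= lr * gW1
    W2 -= lr * gW2

# Nonlinear dimensionality reduction on the incoming weight vector of each
# hidden unit; if the abstract's claim holds in this toy setting, the
# embedded points should trace out the ring defined by the latent angle.
emb = Isomap(n_neighbors=5, n_components=2).fit_transform(W1)
print(emb.shape)                                 # (20, 2): one point per hidden unit
```

The L2 penalty here merely stands in for the "simple, biologically motivated regularization of connectivity" mentioned in the abstract; the paper itself may use a different penalty and a different readout of the weights.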
Pages: 330 - 343
Related papers
50 items in total
  • [1] Autoencoder networks extract latent variables and encode these variables in their connectomes
    Farrell, Matthew
    Recanatesi, Stefano
    Mihalas, Stefan
    Reid, R. Clay
    Shea-Brown, Eric
    NEURAL NETWORKS, 2021, 141 : 330 - 343
  • [2] Different Latent Variables Learning in Variational Autoencoder
    Xu, Qingyang
    Yang, Yiqin
    Wu, Zhe
    Zhang, Li
    2017 4TH INTERNATIONAL CONFERENCE ON INFORMATION, CYBERNETICS AND COMPUTATIONAL SOCIAL SYSTEMS (ICCSS), 2017, : 508 - 511
  • [3] On the dimension of Bayesian networks with latent variables
    Stafeev, SV
    PROBABILISTIC METHODS IN DISCRETE MATHEMATICS, 2002, : 367 - 370
  • [4] Networks as mediating variables: a Bayesian latent space approach
    Di Maria, Chiara
    Abbruzzo, Antonino
    Lovison, Gianfranco
    STATISTICAL METHODS AND APPLICATIONS, 2022, 31 (04) : 1015 - 1035
  • [5] Joint Learning of Multiple Differential Networks With Latent Variables
Ou-Yang, Le
    Zhang, Xiao-Fei
    Zhao, Xing-Ming
    Wang, Debby D.
    Wang, Fu Lee
    Lei, Baiying
    Yan, Hong
    IEEE TRANSACTIONS ON CYBERNETICS, 2019, 49 (09) : 3494 - 3506
  • [6] Symptoms as latent variables
    McFarland, Dennis J.
    Malta, Loretta S.
    BEHAVIORAL AND BRAIN SCIENCES, 2010, 33 (2-3) : 165+
  • [7] Correlations reveal the hierarchical organization of biological networks with latent variables
    Haeusler, Stefan
    COMMUNICATIONS BIOLOGY, 2024, 7 (01)
  • [8] A variational approximation for Bayesian networks with discrete and continuous latent variables
    Murphy, KP
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 1999, : 457 - 466