Autoencoder networks extract latent variables and encode these variables in their connectomes

Cited: 0
Authors
Farrell, Matthew [1,2]
Recanatesi, Stefano [2]
Reid, R. Clay [3]
Mihalas, Stefan [3]
Shea-Brown, Eric [1,2,3]
Affiliations
[1] Applied Mathematics Department, University of Washington, Seattle, WA, United States
[2] Computational Neuroscience Center, University of Washington, Seattle, WA, United States
[3] Allen Institute for Brain Science, Seattle, WA, United States
Keywords
Learning systems
DOI
Not available
Abstract
Advances in electron microscopy and data processing techniques are leading to increasingly large and complete microscale connectomes. At the same time, advances in artificial neural networks have produced model systems that perform comparably rich computations with perfectly specified connectivity. This raises an exciting scientific opportunity for the study of both biological and artificial neural networks: to infer the underlying circuit function from the structure of its connectivity. A potential roadblock, however, is that, even with well-constrained neural dynamics, there are in principle many different connectomes that could support a given computation. Here, we define a tractable setting in which the problem of inferring circuit function from circuit connectivity can be analyzed in detail: the function of input compression and reconstruction, in an autoencoder network with a single hidden layer. In this setting there is, in general, substantial ambiguity in the weights that can produce the same circuit function, because largely arbitrary changes to the input weights can be undone by applying the inverse modifications to the output weights. However, we use mathematical arguments and simulations to show that adding simple, biologically motivated regularization of connectivity resolves this ambiguity in an interesting way: weights are constrained such that the latent variable structure underlying the inputs can be extracted from the weights using nonlinear dimensionality reduction methods. © 2021 Elsevier Ltd
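The weight ambiguity the abstract describes can be made concrete in a few lines of NumPy, assuming a linear hidden layer for simplicity: any invertible change of hidden-layer basis G applied to the encoder weights is cancelled by applying G⁻¹ to the decoder weights, leaving the input-output map unchanged while altering the individual weights and, typically, their total L2 norm, which is why a simple weight-decay penalty can break the tie. This is a minimal illustrative sketch, not the paper's code; the linear activation and the names (W_in, W_out, G) are assumptions for the demonstration.

```python
import numpy as np

# Minimal linear single-hidden-layer autoencoder (hypothetical
# setup; the paper's networks and notation may differ).
rng = np.random.default_rng(0)
n_in, n_hid = 8, 4

W_in = rng.normal(size=(n_hid, n_in))   # encoder (input) weights
W_out = rng.normal(size=(n_in, n_hid))  # decoder (output) weights
x = rng.normal(size=(n_in, 100))        # batch of inputs

# Any invertible change of hidden-layer basis G leaves the
# input-output map W_out @ W_in unchanged ...
G = rng.normal(size=(n_hid, n_hid))
W_in_alt = G @ W_in
W_out_alt = W_out @ np.linalg.inv(G)
assert np.allclose(W_out @ W_in @ x, W_out_alt @ W_in_alt @ x)

# ... but it changes the individual weights, and typically their
# total L2 norm, so an L2 (weight-decay) penalty distinguishes
# between otherwise functionally identical solutions:
print(np.sum(W_in**2) + np.sum(W_out**2))          # original norm
print(np.sum(W_in_alt**2) + np.sum(W_out_alt**2))  # altered norm
```

Under such a penalty, roughly only norm-preserving transformations of the hidden layer survive as residual symmetries, consistent with the abstract's claim that simple regularization constrains the weights enough for the latent variable structure to become recoverable from them.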
Pages: 330-343