Evidence Networks: simple losses for fast, amortized, neural Bayesian model comparison

Cited by: 4
Authors
Jeffrey, Niall [1]
Wandelt, Benjamin D. [2,3]
Affiliations
[1] UCL, Dept Phys & Astron, Gower St, London, England
[2] Sorbonne Univ, CNRS, IAP, F-75014 Paris, France
[3] Flatiron Inst, Ctr Computat Astrophys, 162 5th Ave, New York, NY, USA
Keywords
Bayesian model comparison; deep learning; simulation-based inference; applications; INFERENCE
DOI
10.1088/2632-2153/ad1a4d
CLC classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Evidence Networks can enable Bayesian model comparison when state-of-the-art methods (e.g. nested sampling) fail and even when likelihoods or priors are intractable or unknown. Bayesian model comparison, i.e. the computation of Bayes factors or evidence ratios, can be cast as an optimization problem. Though the Bayesian interpretation of optimal classification is well known, here we change perspective and present classes of loss functions that result in fast, amortized neural estimators that directly estimate convenient functions of the Bayes factor. This mitigates numerical inaccuracies associated with estimating individual model probabilities. We introduce the leaky parity-odd power (l-POP) transform, leading to the novel 'l-POP-Exponential' loss function. We explore neural density estimation of data probability under each model, showing it to be less accurate and less scalable than Evidence Networks. Multiple real-world and synthetic examples illustrate that Evidence Networks are explicitly independent of the dimensionality of the parameter space and scale mildly with the complexity of the posterior probability density function. This simple yet powerful approach has broad implications for model inference tasks. As an application of Evidence Networks to real-world data, we compute the Bayes factor for two models using gravitational lensing data from the Dark Energy Survey. We briefly discuss applications of our methods to other, related problems of model comparison and evaluation in implicit inference settings.
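The abstract notes that the Bayesian interpretation of optimal classification is well known: a classifier trained to distinguish simulations from two models recovers the Bayes factor. A minimal sketch of that idea under stated assumptions: this toy uses plain binary cross-entropy (not the paper's l-POP-Exponential loss, which is defined in the full text), NumPy gradient descent in place of a deep network, and two Gaussian models invented here so the exact answer is known.

```python
import numpy as np

# Toy setup: two fully known models so the exact Bayes factor is available.
#   M0: d ~ N(0, 1)    M1: d ~ N(1, 1)
# For these, log K(d) = log p(d|M1) - log p(d|M0) = d - 0.5 exactly.
rng = np.random.default_rng(0)
n = 100_000
d = np.concatenate([rng.normal(0.0, 1.0, n),   # simulations from M0 (label 0)
                    rng.normal(1.0, 1.0, n)])  # simulations from M1 (label 1)
y = np.concatenate([np.zeros(n), np.ones(n)])

# A minimal "network": logit(d) = w*d + b, trained with binary cross-entropy.
# With equal numbers of simulations per model, the Bayes-optimal logit equals
# log K(d), so training should drive w toward 1 and b toward -0.5.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * d + b)))  # sigmoid of the logit
    grad = p - y                            # dBCE/dlogit per sample
    w -= lr * np.mean(grad * d)
    b -= lr * np.mean(grad)

print(w, b)  # approaches w ~ 1.0, b ~ -0.5
```

Once trained, the logit evaluated at any observed datum amortizes the model comparison: no sampling is needed at inference time, which is the property the abstract emphasizes.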
Pages: 20
Related Articles (50 records)
  • [31] Fast Evolutionary Neural Architecture Search Based on Bayesian Surrogate Model
    Shi, Rui
    Luo, Jianping
    Liu, Qiqi
    2021 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC 2021), 2021, : 1217 - 1224
  • [32] Bayesian multioutput feedforward neural networks comparison: A conjugate prior approach
    Rossi, V
    Vila, JP
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2006, 17 (01): : 35 - 47
  • [33] A FAST PARTITIONING ALGORITHM AND A COMPARISON OF BINARY FEEDFORWARD NEURAL NETWORKS
    KEIBEK, SAJ
    BARKEMA, GT
    ANDREE, HMA
    SAVENIJE, MHF
    TAAL, A
    EUROPHYSICS LETTERS, 1992, 18 (06): : 555 - 559
  • [34] Computational and Neural Evidence for Altered Fast and Slow Learning from Losses in Problem Gambling
    Iigaya, Kiyohito
    Larsen, Tobias
    Fong, Timothy
    O'Doherty, John P.
    JOURNAL OF NEUROSCIENCE, 2025, 45 (01):
  • [35] Bayesian Networks and Evidence Theory to Model Complex Systems Reliability
    Simon, Ch.
    Weber, Ph.
    Levrat, E.
    JOURNAL OF COMPUTERS, 2007, 2 (01) : 33 - 43
  • [36] Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling
    Hron, Jiri
    Novak, Roman
    Pennington, Jeffrey
    Sohl-Dickstein, Jascha
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [37] Bayesian Graph Neural Network for Fast identification of critical nodes in Uncertain Complex Networks
    Munikoti, Sai
    Das, Laya
    Natarajan, Balasubramaniam
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 3245 - 3251
  • [38] The Bayesian evidence scheme for regularizing probability-density estimating neural networks
    Husmeier, D
    NEURAL COMPUTATION, 2000, 12 (11) : 2685 - 2717
  • [39] Simple, fast, and flexible framework for matrix completion with infinite width neural networks
    Radhakrishnan, Adityanarayanan
    Stefanakis, George
    Belkin, Mikhail
    Uhler, Caroline
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2022, 119 (16)
  • [40] A simple and fast secondary structure prediction method using hidden neural networks
    Lin, K
    Simossis, VA
    Taylor, WR
    Heringa, J
    BIOINFORMATICS, 2005, 21 (02) : 152 - 159