Amortized Bayesian Model Comparison With Evidential Deep Learning

Cited by: 13
Authors:
Radev, Stefan T. [1 ]
D'Alessandro, Marco [2 ]
Mertens, Ulf K. [1 ]
Voss, Andreas [1 ]
Koethe, Ullrich [3 ]
Buerkner, Paul-Christian [4 ]
Affiliations:
[1] Heidelberg Univ, Dept Quantitat Res Methods, D-69117 Heidelberg, Germany
[2] Univ Trento, Dept Psychol & Cognit Sci, I-38122 Trento, Italy
[3] Heidelberg Univ, Visual Learning Lab, IWR, D-69117 Heidelberg, Germany
[4] Aalto Univ, Dept Comp Sci, Espoo 02150, Finland
Keywords:
Computational modeling; Data models; Bayes methods; Mathematical models; Predictive models; Uncertainty; Numerical models; Bayesian inference; computational and artificial intelligence; machine learning; neural networks; statistical learning; COMPUTATION; CHOICE; PREDICTION; INFERENCE; SELECTION;
DOI
10.1109/TNNLS.2021.3124052
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory]
Discipline classification codes:
081104; 0812; 0835; 1405
Abstract
Comparing competing mathematical models of complex processes is a shared goal among many branches of science. The Bayesian probabilistic framework offers a principled way to perform model comparison and extract useful metrics for guiding decisions. However, many interesting models are intractable with standard Bayesian methods, as they lack a closed-form likelihood function or the likelihood is computationally too expensive to evaluate. In this work, we propose a novel method for performing Bayesian model comparison using specialized deep learning architectures. Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset. Moreover, it requires no hand-crafted summary statistics of the data and is designed to amortize the cost of simulation over multiple models, datasets, and dataset sizes. This makes the method especially effective in scenarios where model fit needs to be assessed for a large number of datasets, so that case-based inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems. We demonstrate the utility of our method on toy examples and simulated data from nontrivial models from cognitive science and single-cell neuroscience. We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work. We argue that our framework can enhance and enrich model-based analysis and inference in many fields dealing with computational models of natural processes. We further argue that the proposed measure of epistemic uncertainty provides a unique proxy to quantify absolute evidence even in a framework which assumes that the true data-generating model is within a finite set of candidate models.
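The abstract describes training a neural network on simulated data so that, for a new dataset, it directly outputs evidential (Dirichlet-based) posterior model probabilities together with an epistemic-uncertainty measure. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the names (EvidentialComparisonNet, model_probabilities, epistemic_uncertainty, training_step), the mean-pooling summary, and the simple log-score loss are assumptions made for this example and do not reproduce the authors' actual architecture or training objective.

# Illustrative sketch only (assumed design, not the paper's implementation):
# a network maps a simulated dataset to Dirichlet concentration parameters
# over M candidate models; model probabilities and epistemic uncertainty
# follow in closed form from those parameters.
import torch
import torch.nn as nn

M = 3  # number of candidate models (toy assumption)

class EvidentialComparisonNet(nn.Module):
    def __init__(self, n_features: int, n_models: int = M, hidden: int = 64):
        super().__init__()
        # Permutation-invariant summary: embed each observation, then mean-pool,
        # so datasets of varying size map to a fixed-length summary vector.
        self.embed = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_models), nn.Softplus(),  # positive "evidence"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_obs, n_features) -> alpha: (batch, n_models), alpha > 1
        summary = self.embed(x).mean(dim=1)
        return self.head(summary) + 1.0

def model_probabilities(alpha: torch.Tensor) -> torch.Tensor:
    # Mean of the Dirichlet: the amortized posterior model probabilities.
    return alpha / alpha.sum(dim=-1, keepdim=True)

def epistemic_uncertainty(alpha: torch.Tensor) -> torch.Tensor:
    # "Vacuity" u = M / sum(alpha): close to 1 when the network has gathered
    # little evidence for any model, close to 0 for a concentrated Dirichlet.
    return alpha.shape[-1] / alpha.sum(dim=-1)

def training_step(net, optimizer, x_sim, m_true):
    # x_sim: simulated datasets; m_true: index of the model that generated each.
    alpha = net(x_sim)
    probs = model_probabilities(alpha)
    loss = -torch.log(probs.gather(1, m_true.unsqueeze(1)) + 1e-12).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

In use, one would repeatedly draw a candidate model at random, simulate a dataset from it, and call training_step on each batch; at inference time, model_probabilities(alpha) ranks the candidate models for an observed dataset, while epistemic_uncertainty(alpha) serves as a proxy for absolute evidence (values near 1 suggest the network has little evidence for any of the candidates).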
Pages: 4903-4917
Page count: 15
Related papers (50 in total; entries [21]-[30] shown below)
  • [21] Bao, Wentao; Yu, Qi; Kong, Yu. Evidential Deep Learning for Open Set Action Recognition. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 13329-13338.
  • [22] Oh, Dongpin; Shin, Bonggun. Improving Evidential Deep Learning via Multi-Task Learning. Thirty-Sixth AAAI Conference on Artificial Intelligence / Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence / Twelfth Symposium on Educational Advances in Artificial Intelligence, 2022: 7895-7903.
  • [23] Liu, Huafeng; Wang, Jiaqi; Jing, Liping. Cluster-wise Hierarchical Generative Model for Deep Amortized Clustering. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 15104-15113.
  • [24] Rubanova, Yulia; Dohan, David; Swersky, Kevin; Murphy, Kevin P. Amortized Bayesian Optimization over Discrete Spaces. Conference on Uncertainty in Artificial Intelligence (UAI 2020), 2020, 124: 769-778.
  • [25] Chien, Jen-Tzung. Deep Bayesian Multimedia Learning. MM '20: Proceedings of the 28th ACM International Conference on Multimedia, 2020: 4791-4793.
  • [26] Wang, Hao; Yeung, Dit-Yan. A Survey on Bayesian Deep Learning. ACM Computing Surveys, 2020, 53 (05).
  • [27] Louizos, Christos; Ullrich, Karen; Welling, Max. Bayesian Compression for Deep Learning. Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017, 30.
  • [28] Polson, Nicholas G.; Sokolov, Vadim. Deep Learning: A Bayesian Perspective. Bayesian Analysis, 2017, 12 (04): 1275-1304.
  • [29] Prosper, Harrison B. Deep Learning and Bayesian Methods. XIIth Quark Confinement and the Hadron Spectrum, 2017, 137.
  • [30] Kyburg, H. E. Bayesian and Non-Bayesian Evidential Updating. Artificial Intelligence, 1987, 31 (03): 271-293.