Challenges and Opportunities in High-dimensional Variational Inference

Cited by: 0
Authors
Dhaka, Akash Kumar [1 ,2 ]
Catalina, Alejandro [1 ]
Welandawe, Manushi [3 ]
Andersen, Michael Riis [4 ]
Huggins, Jonathan H. [3 ]
Vehtari, Aki [1 ]
Affiliations
[1] Aalto University, Espoo, Finland
[2] Silo AI, Helsinki, Finland
[3] Boston University, Boston, MA 02215, USA
[4] Technical University of Denmark, Lyngby, Denmark
Funding
Academy of Finland;
Keywords
APPROXIMATION;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Current black-box variational inference (BBVI) methods require the user to make numerous design choices, such as the selection of the variational objective and approximating family, yet there is little principled guidance on how to do so. We develop a conceptual framework and set of experimental tools to understand the effects of these choices, which we leverage to propose best practices for maximizing posterior approximation accuracy. Our approach is based on studying the pre-asymptotic tail behavior of the density ratios between the joint distribution and the variational approximation, then exploiting insights and tools from the importance sampling literature. Our framework and supporting experiments help to distinguish between the behavior of BBVI methods for approximating low-dimensional versus moderate-to-high-dimensional posteriors. In the latter case, we show that mass-covering variational objectives are difficult to optimize and do not improve accuracy, but flexible variational families can improve accuracy and the effectiveness of importance sampling, at the cost of additional optimization challenges. Therefore, for moderate-to-high-dimensional posteriors we recommend using the (mode-seeking) exclusive KL divergence since it is the easiest to optimize, and improving the variational family or using model parameter transformations to make the posterior and optimal variational approximation more similar. On the other hand, in low-dimensional settings, we show that heavy-tailed variational families and mass-covering divergences are effective and can increase the chances that the approximation can be improved by importance sampling.
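For orientation, the objectives contrasted in the abstract can be written out explicitly. The following is a sketch in standard notation, not notation taken from the paper itself: let p(θ, y) denote the joint density, p(θ | y) the posterior, and q(θ) the variational approximation. The mode-seeking (exclusive) and mass-covering (inclusive) KL divergences, together with the density ratio whose pre-asymptotic tail behavior the paper studies, are

\mathrm{KL}\bigl(q \,\|\, p(\cdot \mid y)\bigr) = \mathbb{E}_{q(\theta)}\bigl[\log q(\theta) - \log p(\theta, y)\bigr] + \log p(y) \quad \text{(exclusive, mode-seeking)}

\mathrm{KL}\bigl(p(\cdot \mid y) \,\|\, q\bigr) = \mathbb{E}_{p(\theta \mid y)}\bigl[\log p(\theta \mid y) - \log q(\theta)\bigr] \quad \text{(inclusive, mass-covering)}

r(\theta) = \frac{p(\theta, y)}{q(\theta)}, \qquad \theta \sim q(\theta)

Minimizing the exclusive KL is equivalent to maximizing the usual evidence lower bound, since log p(y) does not depend on q. Heavy right tails in the distribution of r(θ) under draws from q signal that importance-sampling corrections of the approximation will be unreliable, which is how the tail behavior of this ratio links approximation accuracy to tools from the importance sampling literature.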
Pages: 12
Related Papers
50 entries in total
  • [1] Implicit Variational Inference for High-Dimensional Posteriors
    Uppal, Anshuk
    Stensbo-Smidt, Kristoffer
    Boomsma, Wouter
    Frellsen, Jes
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [2] Variational Inference in high-dimensional linear regression
    Mukherjee, Sumit
    Sen, Subhabrata
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [3] Projective Integral Updates for High-Dimensional Variational Inference
    Duersch, Jed A.
    SIAM-ASA JOURNAL ON UNCERTAINTY QUANTIFICATION, 2024, 12 (01): : 69 - 100
  • [4] Variational Bayesian Inference in High-Dimensional Linear Mixed Models
    Yi, Jieyi
    Tang, Niansheng
    MATHEMATICS, 2022, 10 (03)
  • [5] Online Variational Bayes Inference for High-Dimensional Correlated Data
    Kabisa, Sylvie
    Dunson, David B.
    Morris, Jeffrey S.
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2016, 25 (02) : 426 - 444
  • [6] Challenges and opportunities in high-dimensional choice data analyses
    Naik, Prasad
    Wedel, Michel
    Bacon, Lynd
    Bodapati, Anand
    Bradlow, Eric
    Kamakura, Wagner
    Kreulen, Jeffrey
    Lenk, Peter
    Madigan, David M.
    Montgomery, Alan
    MARKETING LETTERS, 2008, 19 (3-4) : 201 - 213
  • [7] Variational inference and sparsity in high-dimensional deep Gaussian mixture models
    Kock, Lucas
    Klein, Nadja
    Nott, David J.
    STATISTICS AND COMPUTING, 2022, 32 (05)