Approximate leave-future-out cross-validation for Bayesian time series models

Cited: 54
Authors
Bürkner, Paul-Christian [1 ]
Gabry, Jonah [2 ,3 ]
Vehtari, Aki [1 ]
Affiliations
[1] Aalto Univ, Dept Comp Sci, Konemiehentie 2, Espoo 02150, Finland
[2] Columbia Univ, Appl Stat Ctr, New York, NY USA
[3] Columbia Univ, ISERP, New York, NY USA
Funding
Academy of Finland
Keywords
Time series analysis; cross-validation; Bayesian inference; Pareto smoothed importance sampling; R package
DOI
10.1080/00949655.2020.1783262
Chinese Library Classification
TP39 [Computer Applications]
Subject Classification
081203 ; 0835
Abstract
One of the common goals of time series analysis is to use the observed series to inform predictions for future observations. In the absence of any actual new data to predict, cross-validation can be used to estimate a model's future predictive accuracy, for instance, for the purpose of model comparison or selection. Exact cross-validation for Bayesian models is often computationally expensive, but approximate cross-validation methods have been developed, most notably methods for leave-one-out cross-validation (LOO-CV). If the actual prediction task is to predict the future given the past, LOO-CV provides an overly optimistic estimate because the information from future observations is available to influence predictions of the past. To properly account for the time series structure, we can use leave-future-out cross-validation (LFO-CV). Like exact LOO-CV, exact LFO-CV requires refitting the model many times to different subsets of the data. Using Pareto smoothed importance sampling, we propose a method for approximating exact LFO-CV that drastically reduces the computational costs while also providing informative diagnostics about the quality of the approximation.
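The abstract's key idea can be illustrated with a toy sketch. The code below is not the paper's method: it uses a hypothetical conjugate normal model (unknown mean, known noise scale) so that both the exact one-step-ahead LFO-CV predictive and the posterior are available in closed form, and it uses raw (unsmoothed) importance weights where the paper applies Pareto smoothing plus a Pareto-k diagnostic that triggers occasional refits. All model choices and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: y_t ~ Normal(mu, 1) with unknown mean mu.
# The paper targets general Bayesian time series models; this conjugate
# setup is chosen only so exact LFO-CV is available in closed form.
T, L = 30, 10                    # series length, minimum training size
y = rng.normal(0.5, 1.0, size=T)

def posterior(n):
    # Posterior of mu given y[:n] under the prior mu ~ Normal(0, 10^2).
    prec = 1 / 10**2 + n         # posterior precision (obs sd = 1)
    return y[:n].sum() / prec, np.sqrt(1 / prec)

def log_pred(y_new, m, s):
    # log p(y_new | data): normal posterior predictive with var s^2 + 1.
    var = s**2 + 1.0
    return -0.5 * (np.log(2 * np.pi * var) + (y_new - m) ** 2 / var)

# Exact 1-step-ahead LFO-CV: "refit" (closed form here) at every step.
elpd_exact = sum(log_pred(y[i], *posterior(i)) for i in range(L, T))

# Importance-sampling approximation: fit once to y[:L], then reweight the
# draws by the likelihood of each newly observed point instead of refitting.
# (The paper Pareto-smooths these weights; this sketch uses raw weights.)
S = 20_000
m0, s0 = posterior(L)
mu = rng.normal(m0, s0, size=S)  # draws from p(mu | y[:L])

log_w = np.zeros(S)              # running log importance ratios
elpd_approx = 0.0
for i in range(L, T):
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # per-draw log p(y_i | mu), combined by self-normalized IS
    lpd_draws = -0.5 * (np.log(2 * np.pi) + (y[i] - mu) ** 2)
    elpd_approx += np.log(np.sum(w * np.exp(lpd_draws)))
    log_w += lpd_draws           # fold y_i into the ratios for the next step

print(elpd_exact, elpd_approx)   # the two ELPD estimates should be close
```

The single fit replaces T - L refits; the cost is weight degeneracy as more observations are folded into the ratios, which is exactly what the paper's Pareto-k diagnostic monitors to decide when a genuine refit is needed.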
Pages: 2499 - 2523 (25 pages)
Related papers
50 records in total
  • [21] Efficient approximate k-fold and leave-one-out cross-validation for ridge regression
    Meijer, Rosa J.
    Goeman, Jelle J.
    BIOMETRICAL JOURNAL, 2013, 55 (02) : 141 - 155
  • [22] A scalable estimate of the out-of-sample prediction error via approximate leave-one-out cross-validation
    Rad, Kamiar Rahnama
    Maleki, Arian
    JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-STATISTICAL METHODOLOGY, 2020, 82 (04) : 965 - 996
  • [23] Erratum to: Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC
    Vehtari, Aki
    Gelman, Andrew
    Gabry, Jonah
    STATISTICS AND COMPUTING, 2017, 27 : 1433 - 1433
  • [24] Cross-validation and predictive metrics in psychological research: Do not leave out the leave-one-out
    Iglesias, Diego
    Sorrel, Miguel A.
    Olmos, Ricardo
    BEHAVIOR RESEARCH METHODS, 2025, 57 (03)
  • [25] Leave-one-out cross-validation is risk consistent for lasso
    Homrighausen, Darren
    McDonald, Daniel J.
    MACHINE LEARNING, 2014, 97 (1-2) : 65 - 78
  • [26] Model averaging based on leave-subject-out cross-validation
    Gao, Yan
    Zhang, Xinyu
    Wang, Shouyang
    Zou, Guohua
    JOURNAL OF ECONOMETRICS, 2016, 192 (01) : 139 - 151
  • [28] Cross-validation to select Bayesian hierarchical models in phylogenetics
    Duchêne, Sebastián
    Duchêne, David A.
    Di Giallonardo, Francesca
    Eden, John-Sebastian
    Geoghegan, Jemma L.
    Holt, Kathryn E.
    Ho, Simon Y. W.
    Holmes, Edward C.
    BMC EVOLUTIONARY BIOLOGY, 2016, 16
  • [30] Bayesian Leave-One-Out Cross Validation Approximations for Gaussian Latent Variable Models
    Vehtari, Aki
    Mononen, Tommi
    Tolvanen, Ville
    Sivula, Tuomas
    Winther, Ole
    JOURNAL OF MACHINE LEARNING RESEARCH, 2016, 17