Comparing Out-of-Sample Performance of Machine Learning Methods to Forecast US GDP Growth

Cited by: 8
Authors
Chu, Ba [1 ]
Qureshi, Shafiullah [1 ,2 ]
Affiliations
[1] Carleton Univ, Dept Econ, 1125 Colonel By Dr, Ottawa, ON, Canada
[2] NUML, Dept Econ, Islamabad, Pakistan
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Lasso; Ridge regression; Random forest; Boosting algorithms; Artificial neural networks; Dimensionality reduction methods; MIDAS; GDP growth
DOI
10.1007/s10614-022-10312-z
Chinese Library Classification
F [Economics];
Discipline Code
02;
Abstract
We run a 'horse race' among popular forecasting methods, including machine learning (ML) and deep learning (DL) methods, that are employed to forecast U.S. GDP growth. Given the unstable nature of GDP growth data, we implement a recursive forecasting strategy to calculate the out-of-sample performance metrics of forecasts for multiple subperiods. We use three sets of predictors: a large set of 224 predictors [of U.S. GDP growth] taken from a large quarterly macroeconomic database (namely, FRED-QD), a small set of nine strong predictors selected from the large set, and another small set including these nine strong predictors together with a high-frequency business condition index. We obtain three main findings: (1) when forecasting with a large number of predictors with mixed predictive power, density-based ML methods (such as bagging, boosting, or neural networks) can somewhat outperform sparsity-based methods (such as Lasso) for short-horizon forecasts, but it is not easy to distinguish the performance of these two types of methods for long-horizon forecasts; (2) density-based ML methods tend to perform better with a large set of predictors than with a small subset of strong predictors, especially for shorter-horizon forecasts; and (3) parsimonious models using a strong high-frequency predictor can outperform other sophisticated ML and DL models using a large number of low-frequency predictors, at least for long-horizon forecasts, highlighting the important role of predictors in economic forecasting. We also find that ensemble ML methods (which are special cases of density-based ML methods) can outperform popular DL methods.
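The recursive forecasting strategy mentioned in the abstract can be illustrated with a minimal sketch: at each evaluation point the model is re-fit on all data observed so far (an expanding window), a one-step-ahead forecast is produced, and out-of-sample errors are accumulated. The function name, the OLS forecaster, and the synthetic data below are illustrative assumptions, not the paper's actual implementation, which compares many ML and DL methods.

```python
import numpy as np

def recursive_forecast_rmse(X, y, first_train_size):
    """Expanding-window one-step-ahead forecasts with OLS.

    At each step t, fit on observations [0, t) and predict y[t];
    the training sample grows recursively, mimicking a recursive
    out-of-sample evaluation scheme.
    """
    errors = []
    for t in range(first_train_size, len(y)):
        Xt = np.column_stack([np.ones(t), X[:t]])   # add intercept
        beta, *_ = np.linalg.lstsq(Xt, y[:t], rcond=None)
        y_hat = np.concatenate(([1.0], X[t])) @ beta
        errors.append(y[t] - y_hat)
    return float(np.sqrt(np.mean(np.square(errors))))

# Synthetic illustration: 120 "quarters" of growth data, 2 predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2))
y = X @ np.array([0.5, -0.3]) + 0.1 * rng.normal(size=120)
rmse = recursive_forecast_rmse(X, y, first_train_size=40)
```

Replacing the OLS fit with Lasso, a random forest, or a neural network inside the same loop yields the comparable out-of-sample metrics the horse race is based on; computing the RMSE over separate subperiods of the evaluation window gives the subperiod results the abstract refers to.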
Pages
1567 - 1609 (43 pages)
Related Papers
50 in total
  • [41] Tuning structure learning algorithms with out-of-sample and resampling strategies
    Chobtham, Kiattikun
    Constantinou, Anthony C.
    KNOWLEDGE AND INFORMATION SYSTEMS, 2024, 66 (08) : 4927 - 4955
  • [42] Learning geodesic metric for out-of-sample extension of isometric embedding
    Li, Chun-Guang
    Guo, Jun
    Nie, Xiangfei
    2006 INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND SECURITY, PTS 1 AND 2, PROCEEDINGS, 2006, : 449 - 452
  • [43] Out-of-sample embedding for manifold learning applied to face recognition
    University of the Basque Country UPV/EHU, San Sebastian, Spain
    IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, : 862 - 868
  • [44] Portfolio performance of linear SDF models: an out-of-sample assessment
    Guidolin, Massimo
    Hansen, Erwin
    Lozano-Banda, Martin
    QUANTITATIVE FINANCE, 2018, 18 (08) : 1425 - 1436
  • [45] Stock Return Serial Dependence and Out-of-Sample Portfolio Performance
    DeMiguel, Victor
    Nogales, Francisco J.
    Uppal, Raman
    REVIEW OF FINANCIAL STUDIES, 2014, 27 (04): : 1031 - 1073
  • [46] Connecting the out-of-sample and pre-image problems in kernel methods
    Arias, Pablo
    Randall, Gregory
    Sapiro, Guillermo
    2007 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-8, 2007, : 524 - +
  • [47] In-sample confidence bands and out-of-sample forecast bands for time-varying parameters: Some comments
    Forbes, Catherine Scipione
    INTERNATIONAL JOURNAL OF FORECASTING, 2016, 32 (03) : 888 - 890
  • [48] Out-of-sample predictability of gold market volatility: The role of US Nonfarm Payroll?
    Salisu, Afees A.
    Bouri, Elie
    Gupta, Rangan
    QUARTERLY REVIEW OF ECONOMICS AND FINANCE, 2022, 86 : 482 - 488
  • [49] Local and Global Regressive Mapping for Manifold Learning with Out-of-Sample Extrapolation
    Yang, Yi
    Nie, Feiping
    Xiang, Shiming
    Zhuang, Yueting
    Wang, Wenhua
    PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-10), 2010, : 649 - 654
  • [50] Lest We Forget: Learn from Out-of-Sample Forecast Errors When Optimizing Portfolios
    Barroso, Pedro
    Saxena, Konark
    REVIEW OF FINANCIAL STUDIES, 2022, 35 (03): : 1222 - 1278