Hyper Nonlocal Priors for Variable Selection in Generalized Linear Models

Cited by: 8
Authors
Wu, Ho-Hsiang [1 ]
Ferreira, Marco A. R. [2 ]
Elkhouly, Mohamed [2 ]
Ji, Tieming [3 ]
Affiliations
[1] NCI, Biostat Branch, Rockville, MD 20850 USA
[2] Virginia Tech, Dept Stat, Blacksburg, VA 24061 USA
[3] Univ Missouri, Dept Stat, Columbia, MO 65211 USA
Source
SANKHYA-SERIES A-MATHEMATICAL STATISTICS AND PROBABILITY | 2020, Vol. 82, No. 1
Funding
U.S. National Science Foundation
Keywords
Bayesian variable selection; Generalized linear model; Nonlocal prior; Scale mixtures; Variable selection consistency; MAXIMUM-LIKELIHOOD; REGRESSION; CONSISTENCY; BAYES;
DOI
10.1007/s13171-018-0151-9
Chinese Library Classification
O21 [Probability theory and mathematical statistics]; C8 [Statistics]
Discipline Classification Codes
020208; 070103; 0714
Abstract
We propose two novel hyper nonlocal priors for variable selection in generalized linear models. To obtain these priors, we first derive two new priors for generalized linear models that combine the Fisher information matrix with the Johnson-Rossell moment and inverse moment priors. We then obtain our hyper nonlocal priors from our nonlocal Fisher information priors by assigning hyperpriors to their scale parameters. As a consequence, the hyper nonlocal priors carry less information about the effect sizes than the Fisher information priors, and thus are very useful in practice whenever prior knowledge of effect sizes is lacking. We develop a Laplace integration procedure to compute posterior model probabilities, and we show that under certain regularity conditions the proposed methods are variable selection consistent. We also show that, when compared to local priors, our hyper nonlocal priors lead to faster accumulation of evidence in favor of a true null hypothesis. Simulation studies that consider binomial, Poisson, and negative binomial regression models indicate that our methods select true models with higher success rates than other existing Bayesian methods. Furthermore, the simulation studies show that our methods lead to mean posterior probabilities for the true models that are closer to their empirical success rates. Finally, we illustrate the application of our methods with an analysis of the Pima Indians diabetes dataset.
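For orientation, the following is a sketch of the standard scalar Johnson-Rossell moment (MOM) and inverse moment (iMOM) densities that the abstract refers to, together with a generic Laplace approximation to a model's marginal likelihood. The hyperparameters (tau, nu, k, sigma^2) and the model index M_j follow Johnson and Rossell's usual convention and are assumptions for illustration; the paper's Fisher information priors and hyper nonlocal priors are built by combining these forms with the Fisher information matrix and hyperpriors on the scale, so the expressions below are not the paper's exact priors.

\[
\pi_{\mathrm{MOM}}(\beta \mid \tau, \sigma^2, k)
  = \frac{\beta^{2k}}{(\tau\sigma^2)^{k}\,(2k-1)!!}\,
    \mathrm{N}\!\left(\beta \mid 0, \tau\sigma^2\right),
\qquad
\pi_{\mathrm{iMOM}}(\beta \mid \tau, \nu, k)
  = \frac{k\,\tau^{\nu/2}}{\Gamma\!\left(\nu/(2k)\right)}\,
    |\beta|^{-(\nu+1)}
    \exp\!\left\{-\left(\tau/\beta^{2}\right)^{k}\right\}.
\]

\[
p(y \mid M_j) \;\approx\; (2\pi)^{d_j/2}\,
  \bigl|H(\hat{\beta}_j)\bigr|^{-1/2}\,
  p(y \mid \hat{\beta}_j, M_j)\,\pi(\hat{\beta}_j \mid M_j),
\qquad
p(M_j \mid y) \;\propto\; p(y \mid M_j)\,p(M_j),
\]

where \(d_j\) is the dimension of \(\beta_j\), \(\hat{\beta}_j\) maximizes \(p(y \mid \beta_j, M_j)\,\pi(\beta_j \mid M_j)\), and \(H(\hat{\beta}_j)\) is the negative Hessian of its logarithm at \(\hat{\beta}_j\). Both nonlocal densities vanish at \(\beta = 0\), which is what drives the faster accumulation of evidence for a true null hypothesis mentioned in the abstract.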
Pages: 147-185
Number of pages: 39