Estimation in generalised linear mixed models with binary outcomes by simulated maximum likelihood

Cited by: 34
Authors
Ng, ESW
Carpenter, JR
Goldstein, H
Rasbash, J
Affiliations
[1] Univ Bristol, Grad Sch Educ, Ctr Multilevel Modelling, Bristol BS8 1JA, Avon, England
[2] Univ London, London Sch Hyg & Trop Med, Med Stat Unit, London WC1E 7HU, England
Keywords
bias correction; Kuk's method; Monte-Carlo integration; numerical integration; Robbins-Monro algorithm; simulated maximum likelihood;
DOI
10.1191/1471082X06st106oa
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject Classification Codes
020208 ; 070103 ; 0714 ;
Abstract
Fitting multilevel models to discrete outcome data is problematic because the discrete distribution of the response variable implies an analytically intractable log-likelihood function. Among a number of approximate methods proposed, second-order penalised quasi-likelihood (PQL) is commonly used and is one of the most accurate. Unfortunately, even the second-order PQL approximation has been shown to produce estimates biased toward zero in certain circumstances. This bias can be especially marked when the data are sparse. One option to reduce this bias is to use Monte-Carlo simulation. A bootstrap bias correction method proposed by Kuk has been implemented in MLwiN. However, a similar technique based on the Robbins-Monro (RM) algorithm is potentially more efficient. An alternative is to use simulated maximum likelihood (SML), either alone or to refine estimates identified by other methods. In this article, we first compare bias correction using the RM algorithm, Kuk's method and SML. We find that SML performs as efficiently as the other two methods and also yields standard errors of the bias-corrected parameter estimates and an estimate of the log-likelihood at the maximum, with which nested models can be compared. Secondly, using simulated and real data examples, we compare SML, second-order Laplace approximation (as implemented in HLM), Markov Chain Monte-Carlo (MCMC) (in MLwiN) and numerical integration using adaptive quadrature methods (in Stata's GLLAMM and in SAS's proc NLMIXED). We find that when the data are sparse, the second-order Laplace approximation produces markedly lower parameter estimates, whereas the MCMC method produces estimates that are noticeably higher than those from the SML and quadrature methods. Although proc NLMIXED is much faster than GLLAMM, it is not designed to fit models of more than two levels. SML produces parameter estimates and log-likelihoods very similar to those from quadrature methods.
Further, our SML approach extends to handle other link functions, discrete data distributions, non-normal random effects and higher-level models.
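To make the SML idea concrete, the following is a minimal sketch (not the paper's implementation) of simulated maximum likelihood for a two-level random-intercept logistic model: the marginal likelihood of each cluster is approximated by averaging the conditional Bernoulli likelihood over Monte-Carlo draws of the random effect, with common random numbers held fixed across optimiser iterations so the simulated log-likelihood is a smooth function of the parameters. All data, sample sizes and draw counts here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Hypothetical sparse two-level binary data: 50 clusters of 5 observations.
J, n_j = 50, 5
beta_true, sigma_true = (-0.5, 1.0), 0.8
x = rng.normal(size=(J, n_j))
u = rng.normal(scale=sigma_true, size=J)          # cluster random intercepts
y = rng.binomial(1, expit(beta_true[0] + beta_true[1] * x + u[:, None]))

# Common random draws, reused at every evaluation of the objective so the
# simulated log-likelihood is smooth in theta (needed for gradient methods).
M = 500
z = rng.normal(size=M)

def neg_sim_loglik(theta):
    b0, b1, log_sigma = theta
    u_draws = np.exp(log_sigma) * z               # u^(m) ~ N(0, sigma^2)
    eta = b0 + b1 * x[..., None] + u_draws        # shape (J, n_j, M)
    pm = expit(eta)
    # Conditional likelihood of each cluster for each draw, then the
    # Monte-Carlo average over draws approximates the marginal likelihood.
    lik = np.where(y[..., None] == 1, pm, 1.0 - pm).prod(axis=1)  # (J, M)
    return -np.log(lik.mean(axis=1)).sum()

fit = minimize(neg_sim_loglik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
```

In practice the paper's approach adds importance sampling around preliminary estimates and antithetic or quasi-random draws to reduce Monte-Carlo error; the plain average above is only the basic construction.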
Pages: 23-42 (20 pages)
Related Papers
50 in total
  • [41] Reversible jump methods for generalised linear models and generalised linear mixed models
    Forster, Jonathan J.
    Gill, Roger C.
    Overstall, Antony M.
    STATISTICS AND COMPUTING, 2012, 22 (01) : 107 - 120
  • [43] Maximum likelihood estimation in mixed normal models with two variance components
    Gnot, S
    Stemann, D
    Trenkler, G
    Urbanska-Motyka, A
    STATISTICS, 2002, 36 (04) : 283 - 302
  • [44] Estimation of multinomial logit models with unobserved heterogeneity using maximum simulated likelihood
    Haan, Peter
    Uhlendorff, Arne
    STATA JOURNAL, 2006, 6 (02): : 229 - 245
  • [45] ESTIMATION OF DYNAMIC DISCRETE CHOICE MODELS BY MAXIMUM LIKELIHOOD AND THE SIMULATED METHOD OF MOMENTS
    Eisenhauer, Philipp
    Heckman, James J.
    Mosso, Stefano
    INTERNATIONAL ECONOMIC REVIEW, 2015, 56 (02) : 331 - 357
  • [46] A nonparametric simulated maximum likelihood estimation method
    Fermanian, JD
    Salanié, B
    ECONOMETRIC THEORY, 2004, 20 (04) : 701 - 734
  • [47] Maximum Likelihood for Variance Estimation in High-Dimensional Linear Models
    Dicker, Lee H.
    Erdogdu, Murat A.
    ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 51, 2016, 51 : 159 - 167
  • [48] Maximum nonparametric kernel likelihood estimation for multiplicative linear regression models
    Zhang, Jun
    Lin, Bingqing
    Yang, Yiping
    STATISTICAL PAPERS, 2022, 63 (03) : 885 - 918
  • [49] Trimmed Maximum Likelihood Estimation for Robust Learning in Generalized Linear Models
    Awasthi, Pranjal
    Das, Abhimanyu
    Kong, Weihao
    Sen, Rajat
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [50] Maximum-likelihood estimation for multivariate spatial linear coregionalization models
    Zhang, Hao
    ENVIRONMETRICS, 2007, 18 (02) : 125 - 139