Multivariable prognostic models: Issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors

Cited: 32
Authors
Harrell, FE [1 ]
Lee, KL [1 ]
Mark, DB [1 ]
Institution
[1] Duke University Medical Center, Division of Cardiology, Durham, NC 27710, USA
Keywords
DOI
10.1002/(SICI)1097-0258(19960229)15:4<361::AID-SIM168>3.0.CO;2-4
CLC Number: Q [Biological Sciences]
Subject Classification: 07; 0710; 09
Abstract
Multivariable regression models are powerful tools that are used frequently in studies of clinical outcomes. These models can use a mixture of categorical and continuous variables and can handle partially observed (censored) responses. However, uncritical application of modelling techniques can result in models that poorly fit the dataset at hand, or, even more likely, inaccurately predict outcomes on new subjects. One must know how to measure qualities of a model's fit in order to avoid poorly fitted or overfitted models. Measurement of predictive accuracy can be difficult for survival time data in the presence of censoring. We discuss an easily interpretable index of predictive discrimination as well as methods for assessing calibration of predicted survival probabilities. Both types of predictive accuracy should be unbiasedly validated using bootstrapping or cross-validation, before using predictions in a new data series. We discuss some of the hazards of poorly fitted and overfitted regression models and present one modelling strategy that avoids many of the problems discussed. The methods described are applicable to all regression models, but are particularly needed for binary, ordinal, and time-to-event outcomes. Methods are illustrated with a survival analysis in prostate cancer using Cox regression.
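The abstract's recommendation to validate predictive accuracy with bootstrapping can be sketched as follows. This is an illustrative example only, not the paper's own analysis: it uses simulated data, a binary outcome, a scikit-learn logistic regression, and the area under the ROC curve (AUC) as the discrimination index (Harrell's c-index generalizes AUC to censored survival times, which the paper handles with Cox regression). The sample size, number of predictors, and 100 resamples are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated data: 200 subjects, 5 candidate predictors, binary outcome.
n, p = 200, 5
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

def apparent_auc(X, y):
    """Discrimination of a model fitted and evaluated on the same data."""
    model = LogisticRegression().fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

# Efron-style bootstrap optimism correction: refit the whole modelling
# process on each resample, and measure how much the resample-fitted model's
# accuracy drops when applied back to the original data.
B = 100
optimism = []
for _ in range(B):
    idx = rng.integers(0, n, n)          # sample subjects with replacement
    Xb, yb = X[idx], y[idx]
    m = LogisticRegression().fit(Xb, yb)
    auc_boot = roc_auc_score(yb, m.predict_proba(Xb)[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

# Subtracting the average optimism de-biases the apparent (resubstitution)
# accuracy, which otherwise overstates performance on new subjects.
corrected = apparent_auc(X, y) - np.mean(optimism)
print(f"apparent AUC {apparent_auc(X, y):.3f}, "
      f"optimism-corrected {corrected:.3f}")
```

The key design point, emphasized in the paper, is that every modelling step (here just the fit, but in practice also variable selection) must be repeated inside each bootstrap resample; correcting only the final fitted model understates the optimism.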
Pages: 361-387 (27 pages)
Related Papers (50 entries)
  • [31] Systematic Review of Multivariable Prognostic Models for Mild Traumatic Brain Injury
    Silverberg, Noah D.
    Gardner, Andrew J.
    Brubacher, Jeffrey R.
    Panenka, William J.
    Li, Jun Jian
    Iverson, Grant L.
    JOURNAL OF NEUROTRAUMA, 2015, 32 (08) : 517 - 526
  • [32] Bootstrapping multivariate portmanteau tests for vector autoregressive models with weak assumptions on errors
    Li, Muyi
    Zhang, Yanfen
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2022, 165
  • [33] LINEAR-MODELS WITH AUTOCORRELATED ERRORS - STRUCTURAL IDENTIFIABILITY IN THE ABSENCE OF MINIMALITY ASSUMPTIONS
    DEISTLER, M
    SCHRADER, J
    ECONOMETRICA, 1979, 47 (02) : 495 - 504
  • [34] Performance Issues in Evaluating Models and Designing Simulation Algorithms
    Ewald, Roland
    Himmelspach, Jan
    Jeschke, Matthias
    Leye, Stefan
    Uhrmacher, Adelinde M.
    2009 INTERNATIONAL WORKSHOP ON HIGH PERFORMANCE COMPUTATIONAL SYSTEMS BIOLOGY, PROCEEDINGS, 2009, : 71 - 80
  • [35] Theoretical and practical issues in evaluating the quality of conceptual models
    Moody, DL
    ADVANCED CONCEPTUAL MODELING TECHNIQUES, 2003, 2784 : 241 - 242
  • [36] Models for Online Computing in Developing Countries: Issues and Deliberations
    Jolliffe, Bob
    Poppe, Olav
    Adaletey, Denis
    Braa, Jorn
    INFORMATION TECHNOLOGY FOR DEVELOPMENT, 2015, 21 (01) : 151 - 161
  • [37] PRACTICAL ISSUES IN DEVELOPING ECONOMIC MODELS FOR TARGETED TREATMENTS
    Wlodarczyk, J.
    Kemp, D.
    Leadbitter, S.
    VALUE IN HEALTH, 2015, 18 (03) : A22 - A22
  • [38] MEASURING ERRORS IN APPROXIMATING STOCHASTIC COMBAT MODELS WITH DIFFERENTIAL EQUATIONS
    FARRELL, RL
    OPERATIONS RESEARCH, 1975, 23 : B289 - B289
  • [39] Developing and evaluating predictive conveyor belt wear models
    Webb, Callum
    Sikorska, Joanna
    Khan, Ramzan Nazim
    Hodkiewicz, Melinda
    DATA-CENTRIC ENGINEERING, 2020, 1 (1-2):
  • [40] A Suite of Rules for Developing and Evaluating Software Quality Models
    AL-Badareen, Anas Bassam
    Desharnais, Jean-Marc
    Abran, Alain
    SOFTWARE MEASUREMENT (IWSM-MENSURA 2015), 2015, 230 : 1 - 13