Sample Size Requirements for Popular Classification Algorithms in Tabular Clinical Data: Empirical Study

Cited by: 0
Authors
Silvey, Scott [1 ]
Liu, Jinze [1 ]
Affiliations
[1] Virginia Commonwealth Univ, Sch Publ Hlth, Dept Biostat, 830 East Main St, Richmond, VA 23219 USA
Keywords
medical informatics; machine learning; sample size; research design; decision trees; classification algorithm; clinical research; learning-curve analysis; guidelines; ML; decision making; dataset
DOI
10.2196/60231
Chinese Library Classification
R19 [Health Care Organization and Services (Health Services Management)]
Abstract
Background: The performance of a classification algorithm eventually reaches a point of diminishing returns, where additional samples no longer improve the results. There is therefore a need to determine an optimal sample size that maximizes performance while accounting for computational burden or budgetary concerns.

Objective: This study aimed to determine optimal sample sizes and the relationships between sample size and dataset-level characteristics across a variety of binary classification algorithms.

Methods: A total of 16 large open-source datasets were collected, each containing a binary clinical outcome. Four machine learning algorithms were assessed: XGBoost (XGB), random forest (RF), logistic regression (LR), and neural networks (NNs). For each dataset, the cross-validated area under the curve (AUC) was calculated at increasing sample sizes, and learning curves were fit. The sample sizes needed to reach the observed full-dataset AUC minus 2 points (0.02) were calculated from the fitted learning curves and compared across datasets and algorithms. The following dataset-level characteristics were examined: minority class proportion, full-dataset AUC, number of features, type of features, and degree of nonlinearity. Negative binomial regression models were used to quantify the relationships between these characteristics and expected sample sizes within each algorithm. A total of 4 multivariable models were constructed, each selecting the best-fitting combination of dataset-level characteristics.

Results: Among the 16 datasets (full-dataset sample sizes ranging from 70,000 to 1,000,000), the median sample sizes needed to reach AUC stability were 9960 (XGB), 3404 (RF), 696 (LR), and 12,298 (NN). For all 4 algorithms, more balanced classes (multiplier: 0.93-0.96 per 1% increase in minority class proportion) were associated with decreased sample size. Other characteristics varied in importance across algorithms; in general, more features, weaker features, and more complex relationships between the predictors and the response increased expected sample sizes. In multivariable analysis, the top selected predictors were minority class proportion (all 4 algorithms), full-dataset AUC (XGB, RF, and NN), and dataset nonlinearity (XGB, RF, and NN). For LR, the top predictors were minority class proportion, percentage of strong linear features, and number of features. The final multivariable sample size models had high goodness of fit, with dataset-level predictors explaining a majority (66.5%-84.5%) of the deviance in expected sample sizes.

Conclusions: The sample sizes needed to reach AUC stability among the 4 popular classification algorithms vary by dataset and method and are associated with dataset-level characteristics that can be influenced or estimated before the start of a research study.
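As an illustration of the learning-curve procedure the abstract describes, here is a minimal Python sketch (not the authors' code): cross-validated AUC is computed on progressively larger stratified subsamples, an inverse power-law curve is fit to those points, and the fitted curve is inverted to find the sample size at which AUC comes within 0.02 of the full-dataset value. The inverse power-law form, the use of XGBoost, and all function names are illustrative assumptions.

```python
# Minimal sketch of learning-curve-based sample size estimation.
# Assumptions (not from the paper): curve form AUC(n) = a - b * n^(-c),
# XGBoost as the classifier, 5-fold cross-validation.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample
from xgboost import XGBClassifier

def power_law(n, a, b, c):
    # a = asymptotic AUC; b and c control how quickly the curve rises.
    return a - b * n ** (-c)

def auc_at_sizes(X, y, sizes, cv=5, seed=0):
    # Cross-validated AUC on a stratified random subsample of each size.
    aucs = []
    for n in sizes:
        Xs, ys = resample(X, y, n_samples=n, replace=False,
                          stratify=y, random_state=seed)
        model = XGBClassifier(eval_metric="logloss")
        aucs.append(cross_val_score(model, Xs, ys, cv=cv,
                                    scoring="roc_auc").mean())
    return np.array(aucs)

def required_sample_size(sizes, aucs, full_auc, gap=0.02):
    # Fit the learning curve, then solve a - b * n^(-c) = full_auc - gap for n.
    (a, b, c), _ = curve_fit(power_law, np.asarray(sizes, dtype=float),
                             aucs, p0=[max(aucs), 1.0, 0.5], maxfev=10_000)
    target = full_auc - gap
    if a <= target:
        return np.inf  # fitted asymptote never reaches the target AUC
    return float((b / (a - target)) ** (1.0 / c))
```

In practice, a handful of subsample sizes spaced on a log scale (for example, from a few hundred rows up to the full dataset) is usually enough to stabilize this kind of three-parameter fit; the 0.02 tolerance matches the "full-dataset AUC minus 2 points" criterion used in the study.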
Pages: 15
Related Papers (50 total; items [31]-[40] shown)
  • [31] An empirical study of automated privacy requirements classification in issue reports
    Sangaroonsilp, Pattaraporn
    Choetkiertikul, Morakot
    Dam, Hoa Khanh
    Ghose, Aditya
    AUTOMATED SOFTWARE ENGINEERING, 2023, 30 (02)
  • [33] Sample size requirements for case-control study designs
    Edwardes, M. D.
    BMC MEDICAL RESEARCH METHODOLOGY, 1 (1) : 1 - 5
  • [34] Empirical Bayes models of Poisson clinical trials and sample size determination
    Zaslavsky, Boris G.
    PHARMACEUTICAL STATISTICS, 2010, 9 (02) : 133 - 141
  • [35] Empirical Study to Evaluate the Performance of Classification Algorithms on Public Datasets
    Bramesh, S. M.
    Kumar, K. M. Anil
    EMERGING RESEARCH IN ELECTRONICS, COMPUTER SCIENCE AND TECHNOLOGY, ICERECT 2018, 2019, 545 : 447 - 455
  • [36] An empirical comparative study of cost-sensitive classification algorithms
    JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE (Hefei, China), 2005, 18
  • [38] Comparative study of classification algorithms for immunosignaturing data
    Kukreja, Muskan
    Johnston, Stephen Albert
    Stafford, Phillip
    BMC BIOINFORMATICS, 2012, 13
  • [39] Mining of Classification Patterns in Clinical Data through Data Mining Algorithms
    Jacob, Shomona Gracia
    Ramani, R. Geetha
    PROCEEDINGS OF THE 2012 INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTING, COMMUNICATIONS AND INFORMATICS (ICACCI'12), 2012, : 997 - 1003
  • [40] Sample Size Requirements for Calibrated Approximate Credible Intervals for Proportions in Clinical Trials
    De Santis, Fulvio
    Gubbiotti, Stefania
    INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH, 2021, 18 (02) : 1 - 11