EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR EARLY PREDICTION OF PRESSURE INJURY RISK

Cited by: 2
Authors
Alderden, Jenny [1 ]
Johnny, Jace [2 ,3 ]
Brooks, Katie R. [4 ]
Wilson, Andrew [3 ,5 ]
Yap, Tracey L. [4 ]
Zhao, Yunchuan [1 ]
van der Laan, Mark [6 ]
Kennerly, Susan [7 ]
Affiliations
[1] Boise State Univ, Boise, ID USA
[2] Univ Utah, Intermt Med Ctr, Salt Lake City, UT USA
[3] Univ Utah, Salt Lake City, UT USA
[4] Duke Univ, Durham, NC USA
[5] Real World Data Analyt Parexel, Durham, NC USA
[6] Univ Calif Berkeley, Biostat & Stat, Berkeley, CA USA
[7] East Carolina Univ, Greenville, NC USA
Keywords
CRITICAL-CARE PATIENTS; BRADEN SCALE; ULCER; VALIDITY
DOI
10.4037/ajcc2024856
Chinese Library Classification
R4 [Clinical Medicine]
Subject Classification Codes
1002; 100602
Abstract
Background: Hospital-acquired pressure injuries (HAPIs) have a major impact on patient outcomes in intensive care units (ICUs). Effective prevention relies on early and accurate risk assessment. Traditional risk-assessment tools, such as the Braden Scale, often fail to capture ICU-specific factors, limiting their predictive accuracy. Although artificial intelligence models offer improved accuracy, their "black box" nature poses a barrier to clinical adoption.
Objective: To develop an artificial intelligence-based HAPI risk-assessment model enhanced with an explainable artificial intelligence dashboard to improve interpretability at both the global and individual patient levels.
Methods: An explainable artificial intelligence approach was used to analyze ICU patient data from the Medical Information Mart for Intensive Care. Predictor variables were restricted to the first 48 hours after ICU admission. Various machine-learning algorithms were evaluated, culminating in an ensemble "super learner" model. The model's performance was quantified using the area under the receiver operating characteristic curve through 5-fold cross-validation. An explainer dashboard was developed (using synthetic data for patient privacy), featuring interactive visualizations for in-depth model interpretation at the global and local levels.
Results: The final sample comprised 28 395 patients with a 4.9% incidence of HAPIs. The ensemble super learner model performed well (area under the curve = 0.80). The explainer dashboard provided global and patient-level interactive visualizations of model predictions, showing each variable's influence on the risk-assessment outcome.
Conclusion: The model and its dashboard provide clinicians with a transparent, interpretable artificial intelligence-based risk-assessment system for HAPIs that may enable more effective and timely preventive interventions. (American Journal of Critical Care. 2024;33:373-381)
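The abstract describes two technical steps: fitting an ensemble "super learner" evaluated by 5-fold cross-validated AUC, and wrapping the fitted model in an interactive explainer dashboard with global and patient-level views. The abstract does not name the candidate learners, the meta-learner, or the dashboard software, so the sketch below is only an illustration of that workflow under assumptions: scikit-learn's StackingClassifier stands in for the super learner, the open-source explainerdashboard package stands in for the dashboard, and the feature matrix X and HAPI label y are synthetic placeholders rather than the MIMIC-derived variables used in the study.

```python
# Illustrative sketch only: the study's actual learner library, tuning, and
# dashboard tooling are not specified in the abstract. X and y below are
# synthetic placeholders (roughly the 4.9% HAPI incidence reported), not
# MIMIC data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     train_test_split)

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(2000, 10)),
                 columns=[f"feature_{i}" for i in range(10)])
y = pd.Series((rng.random(2000) < 0.049).astype(int), name="hapi")

# Super-learner-style stacked ensemble: base learners are combined by a
# cross-validated meta-learner over their out-of-fold predicted probabilities.
model = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
    cv=5,
)

# 5-fold cross-validated AUC, mirroring the evaluation described in the abstract.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} (+/- {auc.std():.2f})")

# Interactive global/local explanation dashboard (assumed tooling:
# `pip install explainerdashboard`). Kernel SHAP is chosen because the stacked
# model is not a single tree ensemble; a small test slice keeps it tractable.
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
model.fit(X_train, y_train)
explainer = ClassifierExplainer(model, X_test.iloc[:100], y_test.iloc[:100],
                                shap="kernel")
ExplainerDashboard(explainer, title="HAPI risk explainer (illustration)").run()
```

In the original super learner formulation (van der Laan and colleagues), the combining weights are chosen by cross-validation over a richer library of candidate algorithms; the stacking configuration above only approximates that idea with a logistic-regression combiner over out-of-fold predicted probabilities.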
Pages: 373-381
Page count: 9