EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR EARLY PREDICTION OF PRESSURE INJURY RISK

Times Cited: 2
Authors
Alderden, Jenny [1 ]
Johnny, Jace [2 ,3 ]
Brooks, Katie R. [4 ]
Wilson, Andrew [3 ,5 ]
Yap, Tracey L. [4 ]
Zhao, Yunchuan [1 ]
van der Laan, Mark [6 ]
Kennerly, Susan [7 ]
Affiliations
[1] Boise State Univ, Boise, ID USA
[2] Univ Utah, Intermt Med Ctr, Salt Lake City, UT USA
[3] Univ Utah, Salt Lake City, UT USA
[4] Duke Univ, Durham, NC USA
[5] Real World Data Analyt Parexel, Durham, NC USA
[6] Univ Calif Berkeley, Biostat & Stat, Berkeley, CA USA
[7] East Carolina Univ, Greenville, NC USA
Keywords
CRITICAL-CARE PATIENTS; BRADEN SCALE; ULCER; VALIDITY;
DOI
10.4037/ajcc2024856
Chinese Library Classification: R4 [Clinical Medicine]
Subject Classification Codes: 1002; 100602
Abstract
Background: Hospital-acquired pressure injuries (HAPIs) have a major impact on patient outcomes in intensive care units (ICUs). Effective prevention relies on early and accurate risk assessment. Traditional risk-assessment tools, such as the Braden Scale, often fail to capture ICU-specific factors, limiting their predictive accuracy. Although artificial intelligence models offer improved accuracy, their "black box" nature poses a barrier to clinical adoption.

Objective: To develop an artificial intelligence-based HAPI risk-assessment model enhanced with an explainable artificial intelligence dashboard to improve interpretability at both the global and individual patient levels.

Methods: An explainable artificial intelligence approach was used to analyze ICU patient data from the Medical Information Mart for Intensive Care. Predictor variables were restricted to the first 48 hours after ICU admission. Various machine-learning algorithms were evaluated, culminating in an ensemble "super learner" model. The model's performance was quantified using the area under the receiver operating characteristic curve through 5-fold cross-validation. An explainer dashboard was developed (using synthetic data for patient privacy), featuring interactive visualizations for in-depth model interpretation at the global and local levels.

Results: The final sample comprised 28 395 patients with a 4.9% incidence of HAPIs. The ensemble super learner model performed well (area under the curve = 0.80). The explainer dashboard provided global and patient-level interactive visualizations of model predictions, showing each variable's influence on the risk-assessment outcome.

Conclusion: The model and its dashboard provide clinicians with a transparent, interpretable artificial intelligence-based risk-assessment system for HAPIs that may enable more effective and timely preventive interventions. (American Journal of Critical Care. 2024;33:373-381)
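To make the Methods concrete, the sketch below shows how a stacked "super learner" ensemble can be scored with 5-fold cross-validated AUC. This is an illustration only, not the authors' code: the synthetic predictor matrix, the choice of base learners, and the ~5% positive-class rate (chosen to mirror the reported HAPI incidence) are all assumptions.

```python
# Illustrative super-learner sketch using scikit-learn stacking.
# Assumptions: synthetic data stands in for the first-48-hour ICU
# predictors; the base learners are arbitrary examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic cohort with a rare positive class (~5%), echoing the
# 4.9% HAPI incidence reported in Results.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95], random_state=0)

# A "super learner" combines base models via a meta-learner trained
# on out-of-fold predictions (here, internal 5-fold stacking).
super_learner = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)

# Performance quantified as 5-fold cross-validated area under the
# receiver operating characteristic curve, as in the abstract.
auc_scores = cross_val_score(super_learner, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {auc_scores.mean():.2f}")
```

In practice the explainer dashboard described in the abstract would sit on top of such a fitted model, exposing global feature effects and per-patient explanations; the fitting and scoring loop itself is no more than the few lines above.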
Pages: 373-381 (9 pages)