Explainable Artificial Intelligence for Early Prediction of Pressure Injury Risk

Cited by: 2
Authors
Alderden, Jenny [1 ]
Johnny, Jace [2 ,3 ]
Brooks, Katie R. [4 ]
Wilson, Andrew [3 ,5 ]
Yap, Tracey L. [4 ]
Zhao, Yunchuan [1 ]
van der Laan, Mark [6 ]
Kennerly, Susan [7 ]
Affiliations
[1] Boise State Univ, Boise, ID USA
[2] Univ Utah, Intermt Med Ctr, Salt Lake City, UT USA
[3] Univ Utah, Salt Lake City, UT USA
[4] Duke Univ, Durham, NC USA
[5] Real World Data Analyt Parexel, Durham, NC USA
[6] Univ Calif Berkeley, Biostat & Stat, Berkeley, CA USA
[7] East Carolina Univ, Greenville, NC USA
Keywords
CRITICAL-CARE PATIENTS; BRADEN SCALE; ULCER; VALIDITY
DOI
10.4037/ajcc2024856
Chinese Library Classification
R4 [Clinical Medicine]
Subject Classification Codes
1002; 100602
Abstract
Background: Hospital-acquired pressure injuries (HAPIs) have a major impact on patient outcomes in intensive care units (ICUs). Effective prevention relies on early and accurate risk assessment. Traditional risk-assessment tools, such as the Braden Scale, often fail to capture ICU-specific factors, limiting their predictive accuracy. Although artificial intelligence models offer improved accuracy, their "black box" nature poses a barrier to clinical adoption.
Objective: To develop an artificial intelligence-based HAPI risk-assessment model enhanced with an explainable artificial intelligence dashboard to improve interpretability at both the global and individual patient levels.
Methods: An explainable artificial intelligence approach was used to analyze ICU patient data from the Medical Information Mart for Intensive Care. Predictor variables were restricted to the first 48 hours after ICU admission. Various machine-learning algorithms were evaluated, culminating in an ensemble "super learner" model. The model's performance was quantified using the area under the receiver operating characteristic curve through 5-fold cross-validation. An explainer dashboard was developed (using synthetic data for patient privacy), featuring interactive visualizations for in-depth model interpretation at the global and local levels.
Results: The final sample comprised 28 395 patients with a 4.9% incidence of HAPIs. The ensemble super learner model performed well (area under the curve = 0.80). The explainer dashboard provided global and patient-level interactive visualizations of model predictions, showing each variable's influence on the risk-assessment outcome.
Conclusion: The model and its dashboard provide clinicians with a transparent, interpretable artificial intelligence-based risk-assessment system for HAPIs that may enable more effective and timely preventive interventions. (American Journal of Critical Care. 2024;33:373-381)
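To make the Methods concrete, the following is a minimal sketch of a super-learner-style ensemble evaluated with 5-fold cross-validated AUROC, as the abstract describes. It assumes scikit-learn's StackingClassifier as a stand-in for the super learner and uses synthetic placeholder data with roughly the paper's 4.9% event rate; the base learners, meta-learner, and data here are illustrative assumptions, not the authors' actual specification (the MIMIC records are access-controlled).

```python
# Sketch: super-learner-style stacked ensemble with 5-fold CV AUROC.
# All model and data choices below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in data with ~4.9% positive (HAPI) rate.
X, y = make_classification(n_samples=5000, n_features=30,
                           weights=[0.951], random_state=0)

# Candidate learners; the meta-learner combines their out-of-fold
# predicted probabilities, which is the core idea of the super learner.
base_learners = [
    ("logit", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
]
super_learner = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,                          # internal CV for the meta-learner
    stack_method="predict_proba",
)

# 5-fold cross-validated AUROC, mirroring the paper's evaluation metric.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(super_learner, X, y, cv=cv, scoring="roc_auc")
print(f"Mean AUROC: {auc.mean():.2f} (+/- {auc.std():.2f})")
```

Note that the canonical super learner (van der Laan et al.) selects the convex combination of learners minimizing cross-validated risk; cross-validated stacking as above is a close, widely used approximation.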
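The abstract also describes an interactive explainer dashboard offering global views plus local, per-patient explanations, built on synthetic data for privacy. One hypothetical way to assemble such a dashboard is with the open-source explainerdashboard package; the record does not name the authors' actual tooling, so the library choice, model, and feature names below are all assumptions.

```python
# Sketch: a global/local explainer dashboard on synthetic data,
# assuming the open-source `explainerdashboard` package (the paper's
# actual tooling is not specified in this record).
import pandas as pd
from explainerdashboard import ClassifierExplainer, ExplainerDashboard
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for de-identified ICU features (hypothetical names).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.951], random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(10)])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Any fitted classifier works; a random forest is used for illustration.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The dashboard serves global views (feature importances, dependence
# plots) and local, per-patient contribution plots in a web app.
explainer = ClassifierExplainer(model, X_test, y_test)
ExplainerDashboard(explainer, title="HAPI Risk (synthetic demo)").run()
```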
Pages: 373-381
Page count: 9