An Explainable AI based Clinical Assistance Model for Identifying Patients with the Onset of Sepsis

Cited by: 4
Authors
Chakraborty, Snehashis [1]
Kumar, Komal [1]
Reddy, Balakrishna Pailla [2]
Meena, Tanushree [1]
Roy, Sudipta [1]
Affiliations
[1] Jio Inst, Artificial Intelligence & Data Sci, Navi Mumbai 410206, India
[2] Reliance Jio, Artificial Intelligence Ctr Excellence AICoE, Hyderabad, India
Keywords
Healthcare; XAI; Sepsis Prediction; Autoencoders; Mortality
DOI
10.1109/IRI58017.2023.00059
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The high mortality rate of sepsis, especially in the Intensive Care Unit (ICU), makes it the third-highest cause of mortality globally. Treatment of sepsis is also time consuming and depends on multi-parametric tests, so early identification of patients with sepsis is crucial. The recent rise of Artificial Intelligence (AI) based models, especially for the early prediction of sepsis, has improved patient outcomes. However, drawbacks such as low sensitivity, the use of excess features that leads to overfitting, and a lack of interpretability limit their use in a clinical setting. In this research we have therefore developed a smart, explainable, and highly accurate AI-based model (called XAutoNet) that provides quick and early prediction of sepsis from a minimal number of input features. A novel, application-oriented convolutional neural network (CNN) based autoencoder is also implemented, which improves the performance of XAutoNet through dimensionality reduction. Finally, to unbox the "black box" nature of these models, Gradient-based Class Activation Maps (Grad-CAM) and SHapley Additive exPlanations (SHAP) are applied to the autoencoder and XAutoNet, providing visual explanations that assist clinicians in diagnosis and treatment.
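The abstract describes a CNN-based autoencoder that compresses a small set of patient features before a classifier predicts sepsis onset. The following is a minimal PyTorch sketch of that general idea, not the authors' published code: the feature count, layer sizes, and the "SepsisHead" classifier standing in for XAutoNet are all illustrative assumptions, and the Grad-CAM/SHAP interpretability layer mentioned in the abstract is not shown.

    # Minimal sketch (assumed architecture, not the paper's implementation):
    # a 1D-CNN autoencoder compresses a vector of vital-sign / lab features,
    # and a small classification head predicts sepsis onset from the code.
    import torch
    import torch.nn as nn

    N_FEATURES = 32   # assumed number of input features per patient record
    CODE_DIM = 8      # assumed size of the compressed representation

    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: treat the feature vector as a 1-channel 1D signal.
            self.encoder = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(16 * N_FEATURES, CODE_DIM),
            )
            # Decoder: reconstruct the original feature vector from the code.
            self.decoder = nn.Linear(CODE_DIM, N_FEATURES)

        def forward(self, x):                    # x: (batch, N_FEATURES)
            code = self.encoder(x.unsqueeze(1))  # (batch, CODE_DIM)
            recon = self.decoder(code)           # (batch, N_FEATURES)
            return code, recon

    class SepsisHead(nn.Module):
        """Hypothetical classifier on the compressed code (stand-in for XAutoNet)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(CODE_DIM, 16), nn.ReLU(),
                                     nn.Linear(16, 1))

        def forward(self, code):
            return torch.sigmoid(self.net(code))  # probability of sepsis onset

    if __name__ == "__main__":
        ae, head = ConvAutoencoder(), SepsisHead()
        x = torch.randn(4, N_FEATURES)            # dummy batch of patient records
        code, recon = ae(x)
        p_sepsis = head(code)
        print(recon.shape, p_sepsis.shape)        # (4, 32) and (4, 1)

In practice the autoencoder would be trained on a reconstruction loss and the head on a binary cross-entropy loss; SHAP values or Grad-CAM maps could then be computed on the trained networks to produce the kind of clinician-facing visual explanations the paper reports.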
Pages: 297 - 302
Page count: 6
Related Papers
50 records in total
  • [1] Explainable AI for Fair Sepsis Mortality Predictive Model
    Chang, Chia-Hsuan
    Wang, Xiaoyang
    Yang, Christopher C.
    ARTIFICIAL INTELLIGENCE IN MEDICINE, PT II, AIME 2024, 2024, 14845 : 267 - 276
  • [2] Unleashing the power of explainable AI: sepsis sentinel's clinical assistant for early sepsis identification
    Chakraborty, Snehashis
    Kumar, Komal
    Tadepalli, Kalyan
    Pailla, Balakrishna Reddy
    Roy, Sudipta
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (19) : 57613 - 57641
  • [3] Explainable AI Enabled Infant Mortality Prediction Based on Neonatal Sepsis
    Shaw, Priti
    Pachpor, Kaustubh
    Sankaranarayanan, Suresh
    COMPUTER SYSTEMS SCIENCE AND ENGINEERING, 2023, 44 (01) : 311 - 325
  • [4] Hybrid AI based stroke characterization with explainable model
    Patil, R.
    Shreya, A.
    Maulik, P.
    Chaudhury, S.
    JOURNAL OF THE NEUROLOGICAL SCIENCES, 2019, 405
  • [5] Explainable AI model for PDFMal detection based on gradient boosting model
    Elattar, Mona
    Younes, Ahmed
    Gad, Ibrahim
    Elkabani, Islam
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (34) : 21607 - 21622
  • [6] An Explainable AI Framework for Treatment Failure Model for Oncology Patients
    Zaidi, Syed Hamail Hussain
    Hashmat, Bilal
    Farooq, Muddassar
    EXPLAINABLE ARTIFICIAL INTELLIGENCE AND PROCESS MINING APPLICATIONS FOR HEALTHCARE, XAI-HEALTHCARE 2023 & PM4H 2023, 2024, 2020 : 25 - 35
  • [7] Towards an Explainable Model for Sepsis Detection Based on Sensitivity Analysis
    Chen, M.
    Hernandez, A.
    IRBM, 2022, 43 (01) : 75 - 86
  • [8] An Explainable AI-Based Fault Diagnosis Model for Bearings
    Hasan, Md Junayed
    Sohaib, Muhammad
    Kim, Jong-Myon
    SENSORS, 2021, 21 (12)
  • [9] Explainable AI-Based Interface System for Weather Forecasting Model
    Kim, Soyeon
    Choi, Junho
    Choi, Yeji
    Lee, Subeen
    Stitsyuk, Artyom
    Park, Minkyoung
    Jeong, Seongyeop
    Baek, You-Hyun
    Choi, Jaesik
    HCI INTERNATIONAL 2023 LATE BREAKING PAPERS, HCII 2023, PT VI, 2023, 14059 : 101 - 119
  • [10] EFFICACY OF A SEPSIS CLINICAL DECISION SUPPORT SYSTEM IN IDENTIFYING PATIENTS WITH SEPSIS IN THE EMERGENCY DEPARTMENT
    Hou, Yueh-Tseng
    Wu, Meng-Yu
    Chen, Yu-Long
    Liu, Tzu-Hung
    Cheng, Ruei-Ting
    Hsu, Pei-Lan
    Chao, An-Kuo
    Huang, Ching-Chieh
    Cheng, Fei-Wen
    Lai, Po-Lin
    Wu, I-Feng
    Yiang, Giou-Teng
    SHOCK, 2024, 62 (04) : 480 - 487