An interpretable approach using hybrid graph networks and explainable AI for intelligent diagnosis recommendations in chronic disease care

Cited by: 8
Authors
Huang, Mengxing [1 ]
Zhang, Xiu Shi [1 ]
Bhatti, Uzair Aslam [1 ]
Wu, YuanYuan [1 ]
Zhang, Yu [1 ]
Ghadi, Yazeed Yasin [2 ]
Affiliations
[1] Hainan Univ, Sch Informat & Commun Engn, Haikou 570100, Peoples R China
[2] Al Ain Univ, Dept Comp Sci, Al Ain, U Arab Emirates
Keywords
Drug recommendation system; GCFNA; GCFYA; RMSE; SHAP; LIME; Mean absolute error
DOI
10.1016/j.bspc.2023.105913
Chinese Library Classification (CLC): R318 [Biomedical Engineering]
Discipline code: 0831
Abstract
With the rapid advancement of modern medical technology and the increasing demand for a higher quality of life, there is an emergent requirement for personalized healthcare services. This is particularly pertinent in the sphere of pharmacological recommendations, where providing patients with optimal and efficacious medication regimens is paramount. Traditional methodologies in this domain are increasingly seen as insufficient for the needs of contemporary medicine, prompting a shift towards more sophisticated technologies and algorithms. In this study, we address this pressing need by developing GCF++, comprising two graph-based collaborative filtering methods: GCFYA (with attention) and GCFNA (without attention). These methods hold significant promise for transforming how drug recommendations are made, ensuring that patients receive precise and trustworthy medication suggestions tailored to their unique needs and scenarios. To evaluate and compare these algorithms, we employed three robust metrics: Precision, RMSE (Root Mean Square Error), and Recall. GCFYA achieves a precision of 88% on the hospital dataset and 85% on the public dataset; GCFNA achieves 77% and 78%, respectively, both substantially higher than traditional methods. Furthermore, as algorithm models become increasingly intricate, transparency and interpretability have gained paramount importance. In response, we incorporated two model-interpretation tools, SHAP and LIME, to demystify the decision-making processes behind these algorithms. These tools not only provide clear insights into the basis of recommendation results for both users and developers but also enhance patients' trust in, and satisfaction with, the recommendation system. This study represents a significant step forward in the pursuit of personalized, transparent, and effective healthcare solutions.
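The three evaluation metrics named in the abstract can be sketched as follows. This is a minimal illustration with hypothetical drug names and ratings, not the paper's actual data or implementation: for top-k recommendation, precision is the fraction of recommended items that are relevant, recall is the fraction of relevant items that were recommended, and RMSE measures the deviation of predicted from observed ratings.

```python
import math

def precision_recall(recommended, relevant):
    """Precision = |rec ∩ rel| / |rec|; Recall = |rec ∩ rel| / |rel|."""
    hits = len(set(recommended) & set(relevant))
    return hits / len(recommended), hits / len(relevant)

def rmse(predicted, actual):
    """Root Mean Square Error between predicted and observed ratings."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

# Hypothetical example: top-4 recommended drugs vs. the set a
# clinician later confirmed as appropriate for the patient.
recommended = ["metformin", "lisinopril", "atorvastatin", "aspirin"]
relevant = ["metformin", "atorvastatin", "insulin"]
p, r = precision_recall(recommended, relevant)  # p = 0.5, r = 2/3

# Hypothetical predicted vs. observed suitability ratings.
err = rmse([4.1, 3.5, 2.0], [4.0, 3.0, 2.5])
```

In practice these would be averaged over all patients in the test split; libraries such as scikit-learn provide equivalent functions (`precision_score`, `recall_score`, `mean_squared_error`).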
Pages: 15
Related Papers
50 records
  • [1] An interpretable approach using hybrid graph networks and explainable AI for intelligent diagnosis recommendations in chronic disease care
    Huang, Mengxing
    Zhang, Xiu Shi
    Bhatti, Uzair Aslam
    Wu, YuanYuan
    Zhang, Yu
    Yasin Ghadi, Yazeed
    Biomedical Signal Processing and Control, 2024, 91
  • [2] Advanced interpretable diagnosis of Alzheimer's disease using SECNN-RF framework with explainable AI
    AbdelAziz, Nabil M.
    Said, Wael
    AbdelHafeez, Mohamed M.
    Ali, Asmaa H.
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [3] Explainable AI: A Hybrid Approach to Generate Human-Interpretable Explanation for Deep Learning Prediction
    De, Tanusree
    Giri, Prasenjit
    Mevawala, Ahmeduvesh
    Nemani, Ramyasri
    Deo, Arati
    COMPLEX ADAPTIVE SYSTEMS, 2020, 168 : 40 - 48
  • [4] Multi-Modal Diagnosis of Alzheimer's Disease Using Interpretable Graph Convolutional Networks
    Zhou, Houliang
    He, Lifang
    Chen, Brian Y.
    Shen, Li
    Zhang, Yu
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2025, 44 (01) : 142 - 153
  • [5] Explainable AI Framework for Alzheimer's Diagnosis Using Convolutional Neural Networks
    Mansouri, Dhekra
    Echtioui, Amira
    Khemakhem, Rafik
    Ben Hamida, Ahmed
    2024 IEEE 7TH INTERNATIONAL CONFERENCE ON ADVANCED TECHNOLOGIES, SIGNAL AND IMAGE PROCESSING, ATSIP 2024, 2024, : 93 - 98
  • [6] Interpretable Retinal Disease Classification from OCT Images Using Deep Neural Network and Explainable AI
    Reza, Md Tanzim
    Ahmed, Farzad
    Sharar, Shihab
    Rasel, Annajiat Alim
    PROCEEDINGS OF INTERNATIONAL CONFERENCE ON ELECTRONICS, COMMUNICATIONS AND INFORMATION TECHNOLOGY 2021 (ICECIT 2021), 2021,
  • [7] Personalized Explanations for Early Diagnosis of Alzheimer's Disease Using Explainable Graph Neural Networks with Population Graphs
    Kim, So Yeon
    BIOENGINEERING-BASEL, 2023, 10 (06):
  • [8] Predicting Atrial Fibrillation Relapse Using Bayesian Networks: Explainable AI Approach
    Alves, Joao Miguel
    Matos, Daniel
    Martins, Tiago
    Cavaco, Diogo
    Carmo, Pedro
    Galvao, Pedro
    Costa, Francisco Moscoso
    Morgado, Francisco
    Ferreira, Antonio Miguel
    Freitas, Pedro
    Dias, Claudia Camila
    Rodrigues, Pedro Pereira
    Adragao, Pedro
    JMIR CARDIO, 2025, 9
  • [9] Dual-Graph Learning Convolutional Networks for Interpretable Alzheimer's Disease Diagnosis
    Xiao, Tingsong
    Zeng, Lu
    Shi, Xiaoshuang
    Zhu, Xiaofeng
    Wu, Guorong
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT VIII, 2022, 13438 : 406 - 415
  • [10] Hybrid Approach of Classification of Monkeypox Disease: Integrating Transfer Learning with ViT and Explainable AI
    Siddick, MD Abu Bakar
    Yan, Zhang
    Aziz, Mohammad Tarek
    Rahman, Md Mokshedur
    Mahmud, Tanjim
    Farid, Sha Md
    Uglu, Valisher Sapayev Odilbek
    Irkinovna, Matchanova Barno
    Kuranbaevich, Atayev Shokir
    Hajiev, Ulugbek
    International Journal of Advanced Computer Science and Applications, 2024, 15 (12) : 849 - 861