An interpretable approach using hybrid graph networks and explainable AI for intelligent diagnosis recommendations in chronic disease care

Cited: 8
Authors
Huang, Mengxing [1 ]
Zhang, Xiu Shi [1 ]
Bhatti, Uzair Aslam [1 ]
Wu, YuanYuan [1 ]
Zhang, Yu [1 ]
Ghadi, Yazeed Yasin [2 ]
Affiliations
[1] Hainan Univ, Sch Informat & Commun Engn, Haikou 570100, Peoples R China
[2] Al Ain Univ, Dept Comp Sci, Al Ain, U Arab Emirates
Keywords
Drug Recommendation System; GCFNA; GCFYA; RMSE; SHAP; LIME; MEAN ABSOLUTE ERROR; SYSTEM;
DOI
10.1016/j.bspc.2023.105913
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831 ;
Abstract
With the rapid advancement of modern medical technology and the increasing demand for a higher quality of life, there is an emergent need for personalized healthcare services. This is particularly pertinent to pharmacological recommendation, where providing patients with optimal and efficacious medication regimens is paramount. Traditional methodologies in this domain are increasingly insufficient for the needs of contemporary medicine, prompting a shift towards more sophisticated technologies and algorithms. In this study, we address this pressing need by developing GCF++, comprising two graph-based collaborative filtering methods: GCFYA (with attention) and GCFNA (without attention). These methods hold significant promise for revolutionizing how drug recommendations are made, ensuring that patients receive precise and trustworthy medication suggestions tailored to their unique needs and scenarios. To evaluate and compare these algorithms, we use three robust metrics: precision, RMSE (root mean square error), and recall. GCFYA achieves a precision of 88% on the hospital dataset and 85% on the public dataset; similarly, GCFNA achieves 77% on the hospital dataset and 78% on the public dataset, both considerably higher than traditional methods. Furthermore, as algorithmic models become increasingly intricate, transparency and interpretability have gained paramount importance. In response, we incorporate two model-interpretation tools, SHAP and LIME, to demystify the decision-making processes behind these algorithms. These tools not only give users and developers clear insight into the basis of the recommendation results but also enhance patients' trust in, and satisfaction with, the recommendation system. This study represents a significant step forward in the pursuit of personalized, transparent, and effective healthcare solutions.
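The three evaluation metrics named in the abstract can be illustrated with a minimal sketch. The drug names and rating values below are hypothetical placeholders, not data from the paper's hospital or public datasets; the metric definitions are the standard top-k precision/recall and RMSE formulas.

```python
import math

def precision_recall(recommended, relevant):
    """Precision and recall for one patient's top-k recommendation list.

    precision = |recommended ∩ relevant| / |recommended|
    recall    = |recommended ∩ relevant| / |relevant|
    """
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def rmse(predicted, actual):
    """Root mean square error between predicted and observed ratings."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

# Hypothetical example: 5 recommended drugs, 4 actually relevant.
recs = ["drugA", "drugB", "drugC", "drugD", "drugE"]
rel = ["drugA", "drugC", "drugD", "drugF"]
p, r = precision_recall(recs, rel)  # 3 hits -> p = 3/5, r = 3/4

# Hypothetical predicted vs. observed effectiveness ratings.
err = rmse([4.1, 3.0, 5.0], [4.0, 3.5, 4.5])
```

In this toy case the precision is 0.6 and the recall 0.75; the paper's reported figures (e.g., 88% precision for GCFYA on the hospital dataset) are averages of such per-patient scores over a whole test set.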
Pages: 15
Related Papers
50 records
  • [31] Integrative gene expression analysis for the diagnosis of Parkinson's disease using machine learning and explainable AI
    Bhandari, Nikita
    Walambe, Rahee
    Kotecha, Ketan
    Kaliya, Mehul
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 163
  • [32] Adaptive Gated Graph Convolutional Network for Explainable Diagnosis of Alzheimer's Disease Using EEG Data
    Klepl, D.
    He, F.
    Wu, M.
    Blackburn, D. J.
    Sarrigiannis, P.
    IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023, 31 : 3978 - 3987
  • [33] Securing online integrity: a hybrid approach to deepfake detection and removal using Explainable AI and Adversarial Robustness Training
    Maheshwari, R. Uma
    Paulchamy, B.
    AUTOMATIKA, 2024, 65 (04) : 1517 - 1532
  • [34] Predicting the Conversion from Mild Cognitive Impairment to Alzheimer's Disease Using an Explainable AI Approach
    Grammenos, Gerasimos
    Vrahatis, Aristidis G.
    Vlamos, Panagiotis
    Palejev, Dean
    Exarchos, Themis
    INFORMATION, 2024, 15 (05)
  • [35] An Intelligent System for Parkinson's Diagnosis Using Hybrid Feature Selection Approach
    Lamba, Rohit
    Gulati, Tarun
    Jain, Anurag
    INTERNATIONAL JOURNAL OF SOFTWARE INNOVATION, 2022, 10 (01)
  • [36] Intelligent fault diagnosis of rolling bearings in strongly noisy environments using graph convolutional networks
    Wei, Lunpan
    Peng, Xiuyan
    Cao, Yunpeng
    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, 2024,
  • [37] Interpretable Differential Diagnosis of Non-COVID Viral Pneumonia, Lung Opacity and COVID-19 Using Tuned Transfer Learning and Explainable AI
    Islam, Md. Nazmul
    Alam, Md. Golam Rabiul
    Apon, Tasnim Sakib
    Uddin, Md. Zia
    Allheeib, Nasser
    Menshawi, Alaa
    Hassan, Mohammad Mehedi
    HEALTHCARE, 2023, 11 (03)
  • [38] Intelligent diagnosis of Kawasaki disease from real-world data using interpretable machine learning models
    Duan, Yifan
    Wang, Ruiqi
    Huang, Zhilin
    Chen, Haoran
    Tang, Mingkun
    Zhou, Jiayin
    Hu, Zhengyong
    Hu, Wanfei
    Chen, Zhenli
    Qian, Qing
    Wang, Haolin
    HELLENIC JOURNAL OF CARDIOLOGY, 2025, 81 : 38 - 48
  • [39] Tooth.AI: Intelligent Dental Disease Diagnosis and Treatment Support Using Semantic Network
    Gabbar, Hossam A.
    Chahid, Abderrazak
    Khan, Md. Jamiul Alam
    Grace-Adegboro, Oluwabukola
    Samson, Matthew Immanuel
    IEEE SYSTEMS MAN AND CYBERNETICS MAGAZINE, 2023, 9 (03): : 19 - 27
  • [40] Robust diagnosis to measurement uncertainties using bond graph approach: Application to intelligent autonomous vehicle
    Touati, Youcef
    Merzouki, Rochdi
    Bouamama, Belkacem Ould
    MECHATRONICS, 2012, 22 (08) : 1148 - 1160