Beyond black-box models: explainable AI for embryo ploidy prediction and patient-centric consultation

Cited by: 2
Authors
Luong, Thi-My-Trang [1 ,2 ,3 ]
Ho, Nguyen-Tuong [3 ,4 ]
Hwu, Yuh-Ming [3 ]
Lin, Shyr-Yeu [3 ]
Ho, Jason Yen-Ping [3 ]
Wang, Ruey-Sheng [3 ]
Lee, Yi-Xuan [3 ]
Tan, Shun-Jen [3 ]
Lee, Yi-Rong [3 ]
Huang, Yung-Ling [3 ]
Hsu, Yi-Ching [3 ]
Le, Nguyen-Quoc-Khanh [2 ,5 ,6 ,7 ]
Tzeng, Chii-Ruey [3 ]
Affiliations
[1] Taipei Med Univ, Coll Med, Int Master Program Med, Taipei, Taiwan
[2] Taipei Med Univ, AIBioMed Res Grp, Taipei, Taiwan
[3] Taipei Fertil Ctr, Taipei, Taiwan
[4] My Duc Hosp, IVFMD, Ho Chi Minh, Vietnam
[5] Taipei Med Univ, Coll Med, Profess Master Program Artificial Intelligence Med, Taipei, Taiwan
[6] Taipei Med Univ, Res Ctr Artificial Intelligence Med, Taipei, Taiwan
[7] Taipei Med Univ Hosp, Translat Imaging Res Ctr, Taipei, Taiwan
Keywords
Embryo selection; Ploidy prediction; Explainable artificial intelligence; Machine learning; Preimplantation genetic testing; AGE;
DOI
10.1007/s10815-024-03178-7
Chinese Library Classification: Q3 [Genetics]
Subject classification codes: 071007; 090102
Abstract
Purpose: To determine whether an explainable artificial intelligence (XAI) model enhances the accuracy and transparency of predicting embryo ploidy status from embryonic characteristics and clinical data.

Methods: This retrospective study utilized a dataset of 1908 blastocyst embryos. The dataset includes ploidy status, morphokinetic features, morphology grades, and 11 clinical variables. Six machine learning (ML) models, including Random Forest (RF), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Support Vector Machine (SVM), AdaBoost (ADA), and Light Gradient-Boosting Machine (LGBM), were trained to predict ploidy status probabilities across three distinct datasets: high-grade embryos (HGE, n = 1107), low-grade embryos (LGE, n = 364), and all-grade embryos (AGE, n = 1471). Model performance was interpreted using XAI techniques, namely SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME).

Results: The mean maternal age was 38.5 ± 3.85 years. The RF model outperformed the other five ML models, achieving an accuracy of 0.749 and an AUC of 0.808 for AGE. In the external test set, the RF model achieved an accuracy of 0.714 and an AUC of 0.750 (95% CI, 0.702-0.796). SHAP feature impact analysis highlighted that maternal age, paternal age, time to blastocyst (tB), and day 5 morphology grade significantly influenced the predictive model. In addition, LIME offered case-specific ploidy prediction probabilities, revealing the model's assigned values for each variable within a finite range.

Conclusion: The model highlights the potential of XAI algorithms to enhance ploidy prediction, optimize embryo selection for patient-centric consultation, and provide reliable and transparent insights into the decision-making process.
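The workflow described in the abstract (train a classifier on embryonic and clinical features, evaluate by accuracy/AUC, then inspect global feature impact) can be sketched with scikit-learn on synthetic data. This is an illustrative assumption, not the authors' code: the feature names, distributions, and label model are invented, and permutation importance stands in for the SHAP summary as a model-agnostic proxy.

```python
# Illustrative sketch: train a Random Forest ploidy classifier and
# inspect global feature impact. Synthetic data and feature names are
# placeholders, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["maternal_age", "paternal_age", "tB_hours", "day5_grade"]
n = 1000
X = np.column_stack([
    rng.normal(38.5, 3.85, n),           # maternal age (years)
    rng.normal(40.0, 4.5, n),            # paternal age (years)
    rng.normal(105.0, 8.0, n),           # time to blastocyst, tB (hours)
    rng.integers(1, 6, n).astype(float)  # coarse day-5 morphology grade
])
# Synthetic label: euploidy probability declines with maternal age.
p_euploid = 1.0 / (1.0 + np.exp(0.25 * (X[:, 0] - 38.5)))
y = (rng.random(n) < p_euploid).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)

proba = rf.predict_proba(X_te)[:, 1]
acc = accuracy_score(y_te, rf.predict(X_te))
auc = roc_auc_score(y_te, proba)
print(f"accuracy: {acc:.3f}, AUC: {auc:.3f}")

# Global feature impact via permutation importance (a model-agnostic
# stand-in for the SHAP summary plot used in the paper).
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In the study itself, per-case explanations would come from the `shap` and `lime` packages rather than permutation importance; the structure above only mirrors the train-evaluate-explain sequence.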
Pages: 2349-2358 (10 pages)