Interpretable AI-assisted clinical decision making (CDM) for dose prescription in radiosurgery of brain metastases

Cited by: 1
Authors:
Cao, Yufeng [1 ]
Kunaprayoon, Dan [1 ]
Ren, Lei [1 ,2 ]
Affiliations:
[1] Univ Maryland, Dept Radiat Oncol, Baltimore, MD USA
[2] Univ Maryland, Dept Radiat Oncol, Med Phys Res, Baltimore, MD 21201 USA
Keywords:
Clinical decision-making; Metastases; Interpretable; Deep learning
DOI
10.1016/j.radonc.2023.109842
CLC classification:
R73 [Oncology]
Subject classification code:
100214
Abstract
Purpose: AI models of physicians' clinical decision-making (CDM) can improve the efficiency and accuracy of clinical practice, or serve as a surrogate to provide initial consultations to patients seeking second opinions. In this study, we developed an interpretable AI model that predicts dose fractionation for patients receiving radiation therapy for brain metastases, together with an interpretation of its decision-making process.

Materials/Methods: Data from 152 patients with brain metastases treated by radiosurgery from 2017 to 2021 were collected. CT images and target and organ-at-risk (OAR) contours were extracted. Eight non-image clinical parameters were also extracted and digitized: age, number of brain metastases, ECOG performance status, presence of symptoms, sequencing with surgery (pre- or post-operative radiation therapy), de novo vs. re-treatment, primary cancer type, and metastasis to other sites. 3D convolutional neural network (CNN) architectures with encoding paths were built on the CT data and clinical parameters to capture three inputs: (1) tumor size, shape, and location; (2) the spatial relationship between tumors and OARs; and (3) the clinical parameters. The models fuse the features extracted from these inputs at the decision-making level, learning from each input independently to predict the dose prescription. Models with different numbers of independent paths were developed, combining two (IM-2), three (IM-3), and ten (IM-10) independent paths at the decision-making level. A class activation score and relative weighting were calculated for each input path during model prediction to represent the role of each input in the decision-making process, providing an interpretation of the model prediction. The actual prescription on record was used as the ground truth for model training. Model performance was assessed by 19-fold cross-validation, with each fold consisting of randomly selected 128 training, 16 validation, and 8 testing subjects.

Results: The dose prescriptions of the 152 patient cases, prescribed by 8 physicians, comprised 48 cases with 1 x 24 Gy, 48 cases with 1 x 20-22 Gy, 32 cases with 3 x 9 Gy, and 24 cases with 5 x 6 Gy. IM-2 performed slightly better than IM-3 and IM-10, classifying 131 (86%) patients correctly and misclassifying 21 (14%). IM-10 provided the most interpretability, with a relative weighting for each input: target (34%), relationship between target and OAR (35%), ECOG (6%), re-treatment (6%), metastasis to other sites (6%), number of brain metastases (3%), symptomatic (3%), pre-/post-surgery (3%), primary cancer type (2%), and age (2%), reflecting the importance of the inputs in decision making. The importance ranking of inputs interpreted from the model also matched closely with a physician's own ranking in the decision process.

Conclusion: Interpretable CNN models were successfully developed to predict dose prescriptions for brain metastases patients treated by radiosurgery from CT images and non-image clinical parameters. The models showed high prediction accuracy while providing an interpretation of the decision process, which was validated by the physician. Such interpretability makes the models more transparent, which is crucial for their future clinical adoption in routine practice for CDM assistance. © 2023 Elsevier B.V. All rights reserved. Radiotherapy and Oncology 187 (2023) 109842
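The decision-level fusion and per-path relative weighting described in the abstract can be sketched as follows. This is a minimal NumPy illustration only: the path names, the example scores, and the rule of using each path's (rectified) contribution to the winning class as its class activation score are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

def fuse_paths(path_scores):
    """Fuse per-path class scores at the decision level and report
    each path's relative weight toward the predicted class.

    path_scores: dict mapping path name -> 1-D array of class scores
                 (one score per dose-fractionation class).
    Returns (predicted class index, {path name: relative weight}).
    """
    names = list(path_scores)
    stacked = np.stack([path_scores[n] for n in names])  # (paths, classes)
    fused = stacked.sum(axis=0)                          # decision-level fusion
    pred = int(np.argmax(fused))
    # Class activation score per path: its (rectified) contribution
    # to the winning class; normalize to get relative weights.
    contrib = np.maximum(stacked[:, pred], 0.0)
    if contrib.sum() > 0:
        weights = contrib / contrib.sum()
    else:
        weights = np.full(len(names), 1.0 / len(names))
    return pred, dict(zip(names, weights.round(2)))

# Hypothetical scores for three input paths over four prescription classes
# (e.g., 1x24 Gy, 1x20-22 Gy, 3x9 Gy, 5x6 Gy).
scores = {
    "target":     np.array([0.2, 1.5, 0.1, 0.1]),
    "target_oar": np.array([0.1, 1.2, 0.3, 0.2]),
    "clinical":   np.array([0.3, 0.6, 0.2, 0.1]),
}
pred, weights = fuse_paths(scores)
# pred -> 1; weights -> {"target": 0.45, "target_oar": 0.36, "clinical": 0.18}
```

Because the paths are fused only at the decision level, each path's contribution to the chosen class can be read off directly, which is what allows the per-input weighting (e.g., target 34%, target-OAR relationship 35%) reported for IM-10.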
Pages: 11
Related papers (50 in total):
  • [21] Modeling Human Trust and Reliance in AI-Assisted Decision Making: A Markovian Approach
    Li, Zhuoyan
    Lu, Zhuoran
    Yin, Ming
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 5, 2023, : 6056 - 6064
  • [22] Human-Centered Evaluation of Explanations in AI-Assisted Decision-Making
    Wang, Xinru
    COMPANION PROCEEDINGS OF 2024 29TH ANNUAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2024 COMPANION, 2024, : 134 - 136
  • [23] AI-Assisted Diagnosis and Decision-Making Method in Developing Countries for Osteosarcoma
    Tang, Haojun
    Huang, Hui
    Liu, Jun
    Zhu, Jun
    Gou, Fangfang
    Wu, Jia
    HEALTHCARE, 2022, 10 (11)
  • [24] How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies
    Vereschak, O.
    Bailly, G.
    Caramiaux, B.
    PROCEEDINGS OF THE ACM ON HUMAN-COMPUTER INTERACTION, 2021, 5 (CSCW2)
  • [25] Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in AI-Assisted Decision Making
    Li, Zhuoyan
    Lu, Zhuoran
    Yin, Ming
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 9, 2024, : 10083 - 10091
  • [26] Strategic Adversarial Attacks in AI-assisted Decision Making to Reduce Human Trust and Reliance
    Lu, Zhuoran
    Li, Zhuoyan
    Chiang, Chun-Wei
    Yin, Ming
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 3020 - 3028
  • [27] Are Explanations Helpful? A Comparative Study of the Effects of Explanations in AI-Assisted Decision-Making
    Wang, Xinru
    Yin, Ming
    IUI '21 - 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES, 2021, : 318 - 328
  • [28] How was my performance? Exploring the role of anchoring bias in AI-assisted decision making
    Carter, Lemuria
    Liu, Dapeng
    INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT, 2025, 82
  • [29] Accuracy-Time Tradeoffs in AI-Assisted Decision Making under Time Pressure
    Swaroop, Siddharth
    Bucinca, Zana
    Gajos, Krzysztof Z.
    Doshi-Velez, Finale
    PROCEEDINGS OF 2024 29TH ANNUAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2024, 2024, : 138 - 154
  • [30] AI-assisted diplomatic decision-making during crises-Challenges and opportunities
    Pokhriyal, Neeti
    Koebe, Till
    FRONTIERS IN BIG DATA, 2023, 6