Benchmarking ChatGPT-4 on a radiation oncology in-training exam and Red Journal Gray Zone cases: potentials and challenges for AI-assisted medical education and decision making in radiation oncology

Cited by: 36
Authors
Huang, Yixing [1 ,2 ]
Gomaa, Ahmed [1 ,2 ]
Semrau, Sabine [1 ,2 ]
Haderlein, Marlen [1 ,2 ]
Lettmaier, Sebastian [1 ,2 ]
Weissmann, Thomas [1 ,2 ]
Grigo, Johanna [1 ,2 ]
Tkhayat, Hassen Ben [1 ,3 ]
Frey, Benjamin [1 ,2 ]
Gaipl, Udo [1 ,2 ]
Distel, Luitpold [1 ,2 ]
Maier, Andreas [3 ]
Fietkau, Rainer [1 ,2 ]
Bert, Christoph [1 ,2 ]
Putz, Florian [1 ,2 ]
Affiliations
[1] Friedrich Alexander Univ Erlangen Nurnberg, Univ Hosp Erlangen, Dept Radiat Oncol, Erlangen, Germany
[2] Comprehens Canc Ctr Erlangen EMN CCC ER EMN, Erlangen, Germany
[3] Friedrich Alexander Univ Erlangen Nurnberg, Pattern Recognit Lab, Erlangen, Germany
Source
FRONTIERS IN ONCOLOGY | 2023 / Vol. 13
Keywords
large language model; radiotherapy; natural language processing; artificial intelligence; Gray Zone; clinical decision support (CDS); CANCER; THERAPY;
DOI
10.3389/fonc.2023.1265024
CLC Number
R73 [Oncology];
Subject Classification Code
100214;
Abstract
Purpose: The potential of large language models in medicine for education and decision-making has been demonstrated by their decent scores on medical exams such as the United States Medical Licensing Examination (USMLE) and the MedQA benchmark. This work evaluates the performance of ChatGPT-4 in the specialized field of radiation oncology.

Methods: The 38th American College of Radiology (ACR) radiation oncology in-training (TXIT) exam and the 2022 Red Journal Gray Zone cases were used to benchmark the performance of ChatGPT-4. The TXIT exam contains 300 questions covering various topics in radiation oncology; the 2022 Gray Zone collection contains 15 complex clinical cases.

Results: On the TXIT exam, ChatGPT-3.5 and ChatGPT-4 achieved scores of 62.05% and 78.77%, respectively, highlighting the advantage of the newer ChatGPT-4 model. Based on the TXIT exam, ChatGPT-4's strong and weak areas in radiation oncology can be identified to some extent. Within the ACR knowledge domains, ChatGPT-4 demonstrates better knowledge of statistics, CNS & eye, pediatrics, biology, and physics than of bone & soft tissue and gynecology. Regarding clinical care paths, ChatGPT-4 performs better in diagnosis, prognosis, and toxicity than in brachytherapy and dosimetry, and it lacks proficiency in the in-depth details of clinical trials. For the Gray Zone cases, ChatGPT-4 suggested a personalized treatment approach for each case with high correctness and comprehensiveness. Notably, for many cases it proposed treatment aspects that none of the human experts had suggested.

Conclusion: Both evaluations demonstrate the potential of ChatGPT-4 in medical education for the general public and for cancer patients, as well as its potential to aid clinical decision-making, while also revealing its limitations in certain domains. Owing to the risk of hallucination, content generated by models such as ChatGPT must be verified for accuracy.
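The abstract describes scoring ChatGPT on a 300-question multiple-choice exam. Below is a minimal Python sketch of how such a benchmarking loop could look, assuming the OpenAI chat completions API. The file name txit_questions.json, its schema, and the letter-extraction regex are illustrative assumptions for this sketch, not the authors' actual pipeline, and the TXIT questions themselves are not included here.

```python
# Minimal sketch of a multiple-choice benchmarking loop against a chat model.
# Assumes: openai>=1.0 installed, OPENAI_API_KEY set in the environment, and a
# hypothetical file "txit_questions.json" with entries like
# {"question": "...", "options": {"A": "...", "B": "..."}, "answer": "B"}.
import json
import re

from openai import OpenAI

client = OpenAI()


def ask_model(question: str, options: dict[str, str], model: str = "gpt-4") -> str:
    """Pose one multiple-choice question and return the letter the model picks."""
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in options.items())
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic answers make the score reproducible
        messages=[
            {"role": "system",
             "content": "Answer with the single letter of the best option."},
            {"role": "user", "content": prompt},
        ],
    )
    text = resp.choices[0].message.content or ""
    match = re.search(r"\b([A-E])\b", text)  # tolerate slightly verbose replies
    return match.group(1) if match else ""


with open("txit_questions.json") as f:  # hypothetical exam file
    exam = json.load(f)

correct = sum(ask_model(q["question"], q["options"]) == q["answer"] for q in exam)
print(f"Score: {correct}/{len(exam)} = {100 * correct / len(exam):.2f}%")
```

Setting temperature to 0 and constraining the reply format are common choices in such evaluations so that repeated runs yield comparable percentages; the paper's reported 62.05% vs. 78.77% comparison presupposes exactly this kind of fixed, scriptable scoring protocol.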
Pages: 13