Harnessing ChatGPT and GPT-4 for evaluating the rheumatology questions of the Spanish access exam to specialized medical training

Cited by: 26
Authors
Madrid-Garcia, Alfredo [1 ]
Rosales-Rosado, Zulema [1 ]
Freites-Nunez, Dalifer [1 ]
Perez-Sancristobal, Ines [1 ]
Pato-Cour, Esperanza [1 ]
Plasencia-Rodriguez, Chamaida [2 ]
Cabeza-Osorio, Luis [3 ,4 ]
Abasolo-Alcazar, Lydia [1 ]
Leon-Mateos, Leticia [1 ]
Fernandez-Gutierrez, Benjamin [1 ,5 ]
Rodriguez-Rodriguez, Luis [1 ]
Affiliations
[1] Inst Invest Sanitaria Hosp Clin San Carlos IdISSC, Hosp Clin San Carlos, Grp Patol Musculoesquelet, Prof Martin Lagos S-N, Madrid 28040, Spain
[2] Hosp Univ La Paz IdiPaz, Reumatol, Paseo Castellana,261, Madrid 28046, Spain
[3] Hosp Univ Henares, Med Interna, Ave Marie Curie,0, Madrid 28822, Spain
[4] Univ Francisco Vitoria, Fac Med, Carretera Pozuelo,Km 1800, Madrid 28223, Spain
[5] Univ Complutense Madrid, Fac Med, Madrid, Spain
Source
SCIENTIFIC REPORTS, 2023, Vol. 13, No. 1
Keywords
INTERRATER RELIABILITY; HIGH AGREEMENT; LOW KAPPA;
DOI
10.1038/s41598-023-49483-6
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
The emergence of large language models (LLMs) with remarkable performance, such as ChatGPT and GPT-4, has led to unprecedented uptake in the general population. One of their most promising and studied applications concerns education, owing to their ability to understand and generate human-like text, which creates a multitude of opportunities for enhancing educational practices and outcomes. The objective of this study is twofold: to assess the accuracy of ChatGPT/GPT-4 in answering rheumatology questions from the access exam to specialized medical training in Spain (MIR), and to evaluate the medical reasoning these LLMs follow when answering those questions. A dataset of 145 rheumatology-related questions, RheumaMIR, extracted from the exams held between 2010 and 2023, was created for that purpose, used as prompts for the LLMs, and made publicly available. Six rheumatologists with clinical and teaching experience evaluated the clinical reasoning of the chatbots using a 5-point Likert scale, and their degree of agreement was analyzed. The association between the models' accuracy and variables that could influence it (i.e., year of the exam question, disease addressed, type of question and genre) was studied. ChatGPT demonstrated a high level of performance in both accuracy, 66.43%, and clinical reasoning, median (Q1-Q3), 4.5 (2.33-4.67). However, GPT-4 performed better, with an accuracy of 93.71% and a median clinical reasoning value of 4.67 (4.5-4.83). These findings suggest that LLMs may serve as valuable tools in rheumatology education, aiding in exam preparation and supplementing traditional teaching methods.
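To make the two headline metrics in the abstract concrete, the sketch below shows one plausible way to compute answer accuracy and the median (Q1-Q3) of the reviewers' Likert ratings. This is not the authors' code: the records, field names, and the choice to average the six ratings per question before summarizing are illustrative assumptions only.

```python
# Minimal sketch (assumed data layout, not the RheumaMIR pipeline) of the two
# summary statistics reported in the abstract: multiple-choice accuracy and the
# median (Q1-Q3) of clinical-reasoning scores.
from statistics import median, quantiles

# Hypothetical records: whether the model chose the correct option, plus the
# six rheumatologists' 1-5 Likert ratings of its explanation.
results = [
    {"correct": True,  "ratings": [5, 5, 4, 5, 4, 5]},
    {"correct": True,  "ratings": [4, 5, 5, 4, 5, 5]},
    {"correct": False, "ratings": [2, 3, 2, 1, 2, 3]},
    {"correct": True,  "ratings": [5, 4, 5, 5, 5, 4]},
]

# Accuracy: share of questions answered correctly, as a percentage.
accuracy = 100 * sum(r["correct"] for r in results) / len(results)

# Clinical reasoning: one mean rating per question (an assumed aggregation),
# then the quartiles across questions.
per_question = [sum(r["ratings"]) / len(r["ratings"]) for r in results]
q1, med, q3 = quantiles(per_question, n=4)  # returns Q1, median, Q3

print(f"Accuracy: {accuracy:.2f}%")
print(f"Clinical reasoning, median (Q1-Q3): {med:.2f} ({q1:.2f}-{q3:.2f})")
```

Run as a plain script, this prints the two statistics in the same form as the abstract; the inter-rater agreement analysis mentioned in the keywords (e.g., a kappa statistic) would be computed separately on the six raters' scores.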
Pages: 11