Harnessing ChatGPT and GPT-4 for evaluating the rheumatology questions of the Spanish access exam to specialized medical training

Cited by: 26
Authors
Madrid-Garcia, Alfredo [1 ]
Rosales-Rosado, Zulema [1 ]
Freites-Nunez, Dalifer [1 ]
Perez-Sancristobal, Ines [1 ]
Pato-Cour, Esperanza [1 ]
Plasencia-Rodriguez, Chamaida [2 ]
Cabeza-Osorio, Luis [3 ,4 ]
Abasolo-Alcazar, Lydia [1 ]
Leon-Mateos, Leticia [1 ]
Fernandez-Gutierrez, Benjamin [1 ,5 ]
Rodriguez-Rodriguez, Luis [1 ]
Affiliations
[1] Inst Invest Sanitaria Hosp Clin San Carlos IdISSC, Hosp Clin San Carlos, Grp Patol Musculoesquelet, Prof Martin Lagos S-N, Madrid 28040, Spain
[2] Hosp Univ La Paz IdiPaz, Reumatol, Paseo Castellana, 261, Madrid 28046, Spain
[3] Hosp Univ Henares, Med Interna, Ave Marie Curie, 0, Madrid 28822, Spain
[4] Univ Francisco Vitoria, Fac Med, Carretera Pozuelo, Km 1800, Madrid 28223, Spain
[5] Univ Complutense Madrid, Fac Med, Madrid, Spain
Source
SCIENTIFIC REPORTS | 2023, Vol. 13, No. 1
Keywords
INTERRATER RELIABILITY; HIGH AGREEMENT; LOW KAPPA
DOI
10.1038/s41598-023-49483-6
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
The emergence of large language models (LLMs) with remarkable performance, such as ChatGPT and GPT-4, has led to unprecedented uptake among the general population. One of their most promising and most studied applications is education, owing to their ability to understand and generate human-like text, which creates many opportunities for enhancing educational practices and outcomes. The objective of this study is twofold: to assess the accuracy of ChatGPT/GPT-4 in answering rheumatology questions from the Spanish access exam to specialized medical training (MIR), and to evaluate the medical reasoning these LLMs follow when answering them. For this purpose, a dataset of 145 rheumatology-related questions, RheumaMIR, was extracted from the exams held between 2010 and 2023, used to prompt the LLMs, and made publicly available. Six rheumatologists with clinical and teaching experience rated the chatbots' clinical reasoning on a 5-point Likert scale, and their degree of agreement was analyzed. The association between variables that could influence the models' accuracy (i.e., exam year, disease addressed, question type, and gender) was also studied. ChatGPT demonstrated a high level of performance in both accuracy, 66.43%, and clinical reasoning, median (Q1-Q3) 4.5 (2.33-4.67). GPT-4 performed even better, with an accuracy of 93.71% and a median clinical reasoning score of 4.67 (4.5-4.83). These findings suggest that LLMs may serve as valuable tools in rheumatology education, aiding exam preparation and supplementing traditional teaching methods.
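The two headline analyses described above, answer accuracy against the official exam key and inter-rater agreement on the Likert reasoning scores, are straightforward to reproduce in outline. Below is a minimal Python sketch, not the authors' code: ask_llm is a hypothetical stand-in for a ChatGPT/GPT-4 API call, and the ratings matrix is toy data for illustration.

    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    def ask_llm(question, options):
        # Hypothetical wrapper around a chat-completion API; should return
        # the letter ('A'-'D') the model picks for one exam question.
        raise NotImplementedError  # replace with a real ChatGPT/GPT-4 call

    def accuracy(questions, options, answer_key):
        # Fraction of multiple-choice questions answered correctly.
        predictions = [ask_llm(q, o) for q, o in zip(questions, options)]
        return np.mean([p == a for p, a in zip(predictions, answer_key)])

    # Six raters score the reasoning for each question on a 1-5 Likert scale;
    # rows = questions, columns = raters (toy values for illustration only).
    ratings = np.array([
        [5, 4, 5, 5, 4, 5],
        [3, 2, 3, 3, 2, 2],
        [4, 4, 5, 4, 4, 4],
    ])
    table, _ = aggregate_raters(ratings)  # counts per Likert category per question
    print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))

Note that Fleiss' kappa treats the five Likert levels as nominal categories; an ordinal statistic such as quadratically weighted Cohen's kappa or Krippendorff's alpha would credit near-misses, which is relevant to the "high agreement, low kappa" paradox echoed in the keywords above.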
Pages: 11
Related Papers
31 records in total
  • [1] Harnessing ChatGPT and GPT-4 for evaluating the rheumatology questions of the Spanish access exam to specialized medical training
    Madrid-García, Alfredo
    Rosales-Rosado, Zulema
    Freites-Nuñez, Dalifer
    Pérez-Sancristóbal, Inés
    Pato-Cour, Esperanza
    Plasencia-Rodríguez, Chamaida
    Cabeza-Osorio, Luis
    Abasolo-Alcázar, Lydia
    León-Mateos, Leticia
    Fernández-Gutiérrez, Benjamín
    Rodríguez-Rodríguez, Luis
    SCIENTIFIC REPORTS, 2023, 13 (1)
  • [2] Augmenting Medical Education: An Evaluation of GPT-4 and ChatGPT in Answering Rheumatology Questions from the Spanish Medical Licensing Examination
    Madrid Garcia, Alfredo
    Rosales, Zulema
    Freites, Dalifer
    Perez Sancristobal, Ines
    Fernandez, Benjamin
    Rodriguez Rodriguez, Luis
    ARTHRITIS & RHEUMATOLOGY, 2023, 75 : 4095 - 4097
  • [3] Performance of GPT-4 Vision on kidney pathology exam questions
    Miao, Jing
    Thongprayoon, Charat
    Cheungpasitporn, Wisit
    Cornell, Lynn D.
    AMERICAN JOURNAL OF CLINICAL PATHOLOGY, 2024, 162 (03) : 220 - 226
  • [4] Performance of GPT-4 Vision on kidney pathology exam questions
    Daungsupawong, Hinpetch
    Wiwanitkit, Viroj
    AMERICAN JOURNAL OF CLINICAL PATHOLOGY, 2024,
  • [5] Reply to "Performance of GPT-4 Vision on kidney pathology exam questions"
    Miao, Jing
    Thongprayoon, Charat
    Cheungpasitporn, Wisit
    Cornell, Lynn D.
    AMERICAN JOURNAL OF CLINICAL PATHOLOGY, 2024,
  • [6] Performance of ChatGPT and GPT-4 on Polish National Specialty Exam (NSE) in Ophthalmology
    Ciekalski, Marcin
    Laskowski, Maciej
    Koperczak, Agnieszka
    Smierciak, Maria
    Sirek, Sebastian
    POSTEPY HIGIENY I MEDYCYNY DOSWIADCZALNEJ, 2024, 78 (01) : 111 - 116
  • [7] Re-evaluating GPT-4's bar exam performance
    Martinez, Eric
    ARTIFICIAL INTELLIGENCE AND LAW, 2024,
  • [8] ChatGPT surges ahead: GPT-4 has arrived in the arena of medical research
    Wang, Ying-Mei
    Chen, Tzeng-Ji
    JOURNAL OF THE CHINESE MEDICAL ASSOCIATION, 2023, 86 (09) : 784 - 785
  • [9] Evaluating Large Language Models for the National Premedical Exam in India: Comparative Analysis of GPT-3.5, GPT-4, and Bard
    Farhat, Faiza
    Chaudhry, Beenish Moalla
    Nadeem, Mohammad
    Sohail, Shahab Saquib
    Madsen, Dag Oivind
    JMIR MEDICAL EDUCATION, 2024, 10
  • [10] INTERVENTIONAL NEPHROLOGY ASSESSMENT QUESTIONS: A PERFORMANCE EVALUATION AND COMPARATIVE ANALYSIS OF CHATGPT-3.5 AND GPT-4
    Sheikh, Mohammad
    Qureshi, Fawad
    Thongprayoon, Charat
    Gonzalez Suarez, Lourdes
    Craici, Iasmina
    Cheungpasitporn, Wisit
    AMERICAN JOURNAL OF KIDNEY DISEASES, 2024, 83 (04) : S100 - S101