Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions

Cited by: 3
Authors
Sallam, Malik [1 ,2 ,3 ]
Al-Salahat, Khaled [1 ,2 ,3 ]
Eid, Huda [3 ]
Egger, Jan [4 ]
Puladi, Behrus [5 ]
Affiliations
[1] Univ Jordan, Sch Med, Dept Pathol Microbiol & Forens Med, Amman, Jordan
[2] Jordan Univ Hosp, Dept Clin Labs & Forens Med, Amman, Jordan
[3] Univ Jordan, Sci Approaches Fight Epidem Infect Dis SAFE ID Res, Amman, Jordan
[4] Univ Med Essen AoR, Inst Med AI IKIM, Essen, Germany
[5] Univ Hosp RWTH Aachen, Inst Med Informat, Aachen, Germany
Keywords
AI in healthcare education; higher education; large language models; evaluation; TOOL
DOI
10.2147/AMEP.S479801
Chinese Library Classification (CLC) number
G40 [Education]
Discipline classification codes
040101; 120403
Abstract
Introduction: Artificial intelligence (AI) chatbots excel in language understanding and generation, and these models could transform healthcare education and practice. However, it is important to assess their performance across various topics to highlight their strengths and possible limitations. This study aimed to evaluate the performance of ChatGPT (GPT-3.5 and GPT-4), Bing, and Bard against human students at the postgraduate master's level in Medical Laboratory Sciences.

Methods: The study design followed the METRICS checklist for the design and reporting of AI-based studies in healthcare. The study utilized a dataset of 60 Clinical Chemistry multiple-choice questions (MCQs) originally developed to assess 20 MSc students. The revised Bloom's taxonomy served as the framework for classifying the MCQs into four cognitive categories: Remember, Understand, Apply, and Analyze. A modified version of the CLEAR tool was used to assess the quality of the AI-generated content, with Cohen's kappa used to measure inter-rater agreement.

Results: Compared with the students' mean score of 0.68 ± 0.23, GPT-4 scored 0.90 ± 0.30, followed by Bing (0.77 ± 0.43), GPT-3.5 (0.73 ± 0.45), and Bard (0.67 ± 0.48). GPT-3.5 (P=0.041), GPT-4 (P=0.003), and Bard (P=0.017) performed significantly better on the lower cognitive domains (Remember and Understand) than on the higher cognitive domains (Apply and Analyze). The CLEAR scores rated ChatGPT-4's performance as "Excellent", compared with the "Above average" performance of ChatGPT-3.5, Bing, and Bard.

Discussion: ChatGPT-4 excelled on the Clinical Chemistry exam, while ChatGPT-3.5, Bing, and Bard performed above average. Given that the MCQs were directed at postgraduate students with a high degree of specialization, the performance of these AI chatbots was remarkable. Considering the risk of academic dishonesty and possible overreliance on these AI models, the appropriateness of MCQs as an assessment tool in higher education should be re-evaluated.
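As a rough illustration of the statistical approach described in the abstract, the Python sketch below shows how inter-rater agreement on CLEAR ratings could be quantified with Cohen's kappa and how accuracy on lower- versus higher-order Bloom's categories could be compared. All numbers, and the choice of Fisher's exact test for the domain comparison, are hypothetical assumptions for illustration and are not taken from the study.

# Illustrative sketch only (assumed analysis, not the authors' code):
# Cohen's kappa for two raters' CLEAR ratings, plus a Fisher's exact test
# comparing accuracy on lower-order (Remember/Understand) vs higher-order
# (Apply/Analyze) MCQs. All values are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import fisher_exact

# Hypothetical CLEAR ratings (1-5 scale) assigned by two raters to the same responses
rater_1 = [5, 4, 5, 3, 4, 5, 2, 4, 5, 3]
rater_2 = [5, 4, 4, 3, 4, 5, 3, 4, 5, 3]
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa (inter-rater agreement): {kappa:.2f}")

# Hypothetical 2x2 table for one chatbot: rows = Bloom's level, columns = [correct, incorrect]
counts = [[28, 2],   # lower-order questions (Remember, Understand)
          [22, 8]]   # higher-order questions (Apply, Analyze)
odds_ratio, p_value = fisher_exact(counts)
print(f"Fisher's exact test P-value (lower vs higher cognitive domains): {p_value:.3f}")

Fisher's exact test is used here only because per-question correctness is binary and the per-category counts are small; the abstract does not state which test the authors actually applied.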
Pages: 857-871
Number of pages: 15