Comparative performance analysis of ChatGPT 3.5, ChatGPT 4.0 and Bard in answering common patient questions on melanoma

Cited by: 1
Authors
Deliyannis, Eduardo Panaiotis [1 ]
Paul, Navreet [2 ]
Patel, Priya U. [2 ]
Papanikolaou, Marieta [3 ]
Affiliations
[1] Queen Elizabeth Hosp, Kings Lynn, England
[2] Norfolk & Norwich Univ Hosp, Dermatol Dept, Norwich, England
[3] Kings Coll London, St Johns Inst Dermatol, Sch Basic & Med Biosci, London, England
Keywords
DOI
10.1093/ced/llad409
Chinese Library Classification
R75 [Dermatology and Venereology];
Discipline code
100206;
Abstract
This study evaluates the effectiveness of ChatGPT versions 3.5 and 4.0, and Google's Bard in answering patient questions on melanoma. Results show that both versions of ChatGPT outperform Bard, particularly in readability, with no significant difference between the two ChatGPT versions. The study underscores the potential of large language models in healthcare, highlighting the need for professional oversight and further research.
Pages: 743 - 746
Page count: 4
Related papers
50 items total
  • [41] Evolving Landscape of Large Language Models: An Evaluation of ChatGPT and Bard in Answering Patient Queries on Colonoscopy
    Tariq, Raseen
    Malik, Sheza
    Khanna, Sahil
    GASTROENTEROLOGY, 2024, 166 (01) : 220 - 221
  • [42] Reply to 'Comment on: Benchmarking the performance of large language models in uveitis: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, Google Gemini, and Anthropic Claude3'
    Zhao, Fang-Fang
    He, Han-Jie
    Liang, Jia-Jian
    Cen, Ling-Ping
    EYE, 2025,
  • [43] Performance assessment of ChatGPT 4, ChatGPT 3.5, Gemini Advanced Pro 1.5 and Bard 2.0 to problem solving in pathology in French language
    Tarris, Georges
    Martin, Laurent
    DIGITAL HEALTH, 2025, 11
  • [44] Appropriateness and Readability of ChatGPT-3.5 Responses to Common Patient Questions on Age-Related Macular Degeneration
    Challa, Nayanika
    Luskey, Nina
    Wang, Daniel
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2024, 65 (07)
  • [45] Performance evaluation of ChatGPT-4.0 and Gemini on image-based neurosurgery board practice questions: A comparative analysis
    Mcnulty, Alana M.
    Valluri, Harshitha
    Gajjar, Avi A.
    Custozzo, Amanda
    Field, Nicholas C.
    Paul, Alexandra R.
    JOURNAL OF CLINICAL NEUROSCIENCE, 2025, 134
  • [46] A comparative analysis of the ethics of gene editing: ChatGPT vs. Bard
    Burright, Jack
    Al-khateeb, Samer
    COMPUTATIONAL AND MATHEMATICAL ORGANIZATION THEORY, 2024,
  • [47] A Comparative Analysis of ChatGPT, ChatGPT-4, and Google Bard Performances at the Advanced Burn Life Support Exam
    Alessandri-Bonetti, Mario
    Liu, Hilary Y.
    Donovan, James M.
    Ziembicki, Jenny A.
    Egro, Francesco M.
    JOURNAL OF BURN CARE & RESEARCH, 2024, 45 (04) : 945 - 948
  • [48] Comparative analysis of ChatGPT and Gemini (Bard) in medical inquiry: a scoping review
    Fattah, Fattah H.
    Salih, Abdulwahid M.
    Salih, Ameer M.
    Asaad, Saywan K.
    Ghafour, Abdullah K.
    Bapir, Rawa
    Abdalla, Berun A.
    Othman, Snur
    Ahmed, Sasan M.
    Hasan, Sabah Jalal
    Mahmood, Yousif M.
    Kakamad, Fahmi H.
    FRONTIERS IN DIGITAL HEALTH, 2025, 7
  • [49] Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions
    Sallam, Malik
    Al-Salahat, Khaled
    Eid, Huda
    Egger, Jan
    Puladi, Behrus
    ADVANCES IN MEDICAL EDUCATION AND PRACTICE, 2024, 15 : 857 - 871
  • [50] The ability of artificial intelligence tools to formulate orthopaedic clinical decisions in comparison to human clinicians: An analysis of ChatGPT 3.5, ChatGPT 4, and Bard
    Agharia, Suzen
    Szatkowski, Jan
    Fraval, Andrew
    Stevens, Jarrad
    Zhou, Yushy
    JOURNAL OF ORTHOPAEDICS, 2024, 50 : 1 - 7