Comparative performance analysis of ChatGPT 3.5, ChatGPT 4.0 and Bard in answering common patient questions on melanoma

Cited by: 1
Authors
Deliyannis, Eduardo Panaiotis [1 ]
Paul, Navreet [2 ]
Patel, Priya U. [2 ]
Papanikolaou, Marieta [3 ]
Affiliations
[1] Queen Elizabeth Hosp, Kings Lynn, England
[2] Norfolk & Norwich Univ Hosp, Dermatol Dept, Norwich, England
[3] Kings Coll London, St Johns Inst Dermatol, Sch Basic & Med Biosci, London, England
DOI
10.1093/ced/llad409
Chinese Library Classification
R75 [Dermatology and Venereology]
Discipline code
100206
Abstract
This study evaluates the effectiveness of ChatGPT versions 3.5 and 4.0, and Google's Bard in answering patient questions on melanoma. Results show that both versions of ChatGPT outperform Bard, particularly in readability, with no significant difference between the two ChatGPT versions. The study underscores the potential of large language models in healthcare, highlighting the need for professional oversight and further research.
Pages: 743-746 (4 pages)
Related papers (50 in total)
  • [31] Comments on "Performance of ChatGPT in Answering Clinical Questions on the Practical Guideline of Blepharoptosis"
    Hashemi, Saleh
    Karbalaei, Mohsen
    Keikha, Masoud
    AESTHETIC PLASTIC SURGERY, 2024,
  • [32] Performance of ChatGPT and Bard in self-assessment questions for nephrology board renewal
    Noda, Ryunosuke
    Izaki, Yuto
    Kitano, Fumiya
    Komatsu, Jun
    Ichikawa, Daisuke
    Shibagaki, Yugo
    CLINICAL AND EXPERIMENTAL NEPHROLOGY, 2024, 28 (05) : 465 - 469
  • [33] How AI Responds to Common Lung Cancer Questions: ChatGPT vs Google Bard
    Rahsepar, Amir Ali
    Tavakoli, Neda
    Kim, Grace Hyun J.
    Hassani, Cameron
    Abtin, Fereidoun
    Bedayat, Arash
    RADIOLOGY, 2023, 307 (05)
  • [34] Performance of ChatGPT-3.5, ChatGPT-4, Microsoft Copilot, and Google Bard To Identify Correct Information for Lung Cancer
    Le, Hoa
    Truong, Chi
    PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, 2024, 33 : 347 - 348
  • [35] Comment on: "Benchmarking the performance of large language models in uveitis: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, Google Gemini, and Anthropic Claude3"
    Luo, Xiao
    Tang, Cheng
    Chen, Jin-Jin
    Yuan, Jin
    Huang, Jin-Jin
    Yan, Tao
    EYE, 2025,
  • [36] Analysis of ChatGPT responses to patient-oriented questions on common ophthalmic procedures
    Solli, Elena M.
    Tsui, Edmund
    Mehta, Nitish
    CLINICAL AND EXPERIMENTAL OPHTHALMOLOGY, 2024, 52 (04) : 487 - 491
  • [37] Assessment Study of ChatGPT-3.5's Performance on the Final Polish Medical Examination: Accuracy in Answering 980 Questions
    Siebielec, Julia
    Ordak, Michal
    Oskroba, Agata
    Dworakowska, Anna
    Bujalska-Zadrozny, Magdalena
    HEALTHCARE, 2024, 12 (16)
  • [38] Comment on "ChatGPT Answers Common Patient Questions About Colonoscopy"
    Wu, Qiqi
    GASTROENTEROLOGY, 2024, 166 (01) : 219 - 220
  • [39] Performance of ChatGPT-4 and Bard chatbots in responding to common patient questions on prostate cancer 177Lu-PSMA-617 therapy
    Bilgin, Gokce Belge
    Bilgin, Cem
    Childs, Daniel S.
    Orme, Jacob J.
    Burkett, Brian J.
    Packard, Ann T.
    Johnson, Derek R.
    Thorpe, Matthew P.
    Riaz, Irbaz Bin
    Halfdanarson, Thorvardur R.
    Johnson, Geoffrey B.
    Sartor, Oliver
    Kendi, Ayse Tuba
    FRONTIERS IN ONCOLOGY, 2024, 14
  • [40] ANSWERING COMMON UROLOGICAL QUESTIONS - CHATGPT VS. UROLOGY CARE FOUNDATION PATIENT EDUCATION MATERIALS
    Schwartz, Adam
    JOURNAL OF UROLOGY, 2024, 211 (05) : E297 - E298