Comparative performance analysis of ChatGPT 3.5, ChatGPT 4.0 and Bard in answering common patient questions on melanoma

Cited by: 1
Authors
Deliyannis, Eduardo Panaiotis [1 ]
Paul, Navreet [2 ]
Patel, Priya U. [2 ]
Papanikolaou, Marieta [3 ]
Affiliations
[1] Queen Elizabeth Hosp, Kings Lynn, England
[2] Norfolk & Norwich Univ Hosp, Dermatol Dept, Norwich, England
[3] Kings Coll London, St Johns Inst Dermatol, Sch Basic & Med Biosci, London, England
Keywords
DOI
10.1093/ced/llad409
Chinese Library Classification: R75 [Dermatology and Venereology]
Discipline code: 100206
Abstract
This study evaluates the effectiveness of ChatGPT versions 3.5 and 4.0, and Google's Bard in answering patient questions on melanoma. Results show that both versions of ChatGPT outperform Bard, particularly in readability, with no significant difference between the two ChatGPT versions. The study underscores the potential of large language models in healthcare, highlighting the need for professional oversight and further research.
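The abstract singles out readability as the axis on which both ChatGPT versions outperformed Bard, but does not state which readability metric was used. Below is a minimal sketch, assuming a Flesch Reading Ease comparison, of how two chatbot answers might be scored; the example answers, function names, and syllable heuristic are illustrative assumptions, not taken from the study.

```python
# Minimal sketch: comparing readability of two chatbot answers with the
# Flesch Reading Ease (FRE) formula. The study's actual metric is not
# specified in this abstract; FRE is a common, illustrative choice.
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (at least 1).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    # FRE = 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical answers (not from the study), illustrating the comparison:
chatgpt_answer = "Melanoma is a type of skin cancer. See a doctor if a mole changes."
bard_answer = ("Melanoma represents a malignant neoplasm of melanocytes whose "
               "prognosis is contingent upon Breslow thickness at diagnosis.")

print(f"ChatGPT-style answer FRE: {flesch_reading_ease(chatgpt_answer):.1f}")
print(f"Bard-style answer FRE:    {flesch_reading_ease(bard_answer):.1f}")
```

Higher FRE scores indicate easier-to-read text, so a higher score for one model's answers would support the readability finding summarized above.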
Pages: 743-746 (4 pages)