Assessing the knowledge of ChatGPT and Google Gemini in answering peripheral artery disease-related questions

Cited: 0
Authors
Cetin, Hakki Kursat [1 ]
Demir, Tolga [1 ]
Affiliations
[1] SBU Sisli Hamidiye Etfal Training & Res Hosp, Dept Cardiovasc Surg, Halaskargazi St, TR-34371 Sisli, Turkiye
Keywords
Artificial intelligence; ChatGPT; Google Gemini; Global Quality Score; peripheral artery disease
DOI
10.1177/17085381251315999
Chinese Library Classification: R6 [Surgery]
Discipline codes: 1002; 100210
Abstract
Introduction: To assess and compare the knowledge of ChatGPT and Google Gemini in answering public-oriented and scientific questions about peripheral artery disease (PAD).
Methods: Frequently asked questions (FAQs) about PAD were compiled from social media posts, and the recommendations on PAD in the latest edition of the European Society of Cardiology (ESC) guideline were translated into questions. All questions were prepared in English and submitted to ChatGPT-4 and Google Gemini (formerly Google Bard). Specialists assigned a Global Quality Score (GQS) to each response.
Results: In total, 72 FAQs and 63 ESC guideline-based questions were identified. Fifty-one (70.8%) ChatGPT answers to FAQs were rated GQS 5, as were 44 (69.8%) of its answers to ESC guideline-based questions. For Google Gemini, 40 (55.6%) answers to FAQs and 32 of 63 (50.8%) answers to ESC guideline-based questions were rated GQS 5. Comparison of the GQS ratings showed that ChatGPT gave more accurate and satisfactory answers than Google Gemini for both FAQs about PAD and ESC guideline-based scientific questions (p = 0.031 and p = 0.026, respectively). In contrast, response time was significantly shorter for Google Gemini for both FAQs and scientific questions (p = 0.008 and p = 0.001).
Conclusion: Both ChatGPT and Google Gemini showed limited capacity to answer FAQs and scientific questions related to PAD, but the rate of accurate and satisfactory answers to both question types was significantly higher for ChatGPT.
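To illustrate the statistical comparison the abstract describes: the paper reports p-values for the ChatGPT-versus-Gemini GQS comparison but the abstract does not name the test used. The sketch below assumes a two-sided Mann-Whitney U test, a common choice for ordinal 1-5 ratings such as the GQS; the score vectors are illustrative placeholders, not the study data.

    # Hypothetical re-analysis sketch (assumed test, placeholder data).
    # The GQS is an ordinal 1-5 quality rating, so a nonparametric
    # rank-based test such as Mann-Whitney U is a plausible choice.
    from scipy.stats import mannwhitneyu

    chatgpt_gqs = [5, 5, 5, 4, 5, 3, 5, 4, 5, 5]  # placeholder ratings, 1-5
    gemini_gqs = [5, 4, 3, 4, 5, 3, 4, 3, 5, 4]   # placeholder ratings, 1-5

    # Two-sided test of whether the rating distributions differ.
    stat, p_value = mannwhitneyu(chatgpt_gqs, gemini_gqs, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.3f}")

Run on the real per-question ratings, a comparison of this kind would yield the sort of p-values reported above (e.g., p = 0.031 for the FAQ set).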
Pages: 6