ChatGPT's competence in responding to urological emergencies

Cited: 0
Authors
Ortac, Mazhar [1 ]
Ergul, Rifat Burak [1 ]
Yazili, Huseyin Burak [2 ]
Ozervarli, Muhammet Firat [1 ]
Tonyali, Senol [1 ]
Sarilar, Omer [2 ]
Ozgor, Faruk [2 ]
Affiliations
[1] Istanbul Univ, Istanbul Fac Med, Dept Urol, Istanbul, Turkiye
[2] Haseki Training & Res Hosp, Dept Urol, Istanbul, Turkiye
Keywords
Artificial intelligence; ChatGPT; urological emergencies
DOI
10.14744/tjtes.2024.03377
Chinese Library Classification
R4 [Clinical Medicine]
Discipline Classification Codes
1002; 100602
Abstract
BACKGROUND: In recent years, artificial intelligence (AI) applications have been increasingly used as sources of medical information, alongside their applications in many other fields. This study is the first to evaluate ChatGPT's performance in addressing urological emergencies (UE).

METHODS: The study included frequently asked questions (FAQs) posed by the public regarding UE, as well as UE-related questions formulated from the European Association of Urology (EAU) guidelines. The FAQs were selected from questions posed by patients to doctors and hospital accounts on social media platforms (Facebook, Instagram, and X) and on websites. All questions were presented to ChatGPT-4 (premium version) in English, and the responses were recorded. Two urologists assessed the quality of the responses using the Global Quality Score (GQS) on a scale of 1 to 5.

RESULTS: Of the 73 FAQs, 53 (72.6%) received a GQS of 5, while only two (2.7%) received a GQS of 1; the latter two questions pertained to priapism and urosepsis. The topic with the highest proportion of responses rated 5 was urosepsis (82.3%), whereas the lowest proportions were observed for questions on renal trauma (66.7%) and postrenal acute kidney injury (66.7%). Of the 42 questions formulated from the EAU guidelines, 23 (54.8%) received a GQS of 5 from the physicians. The mean GQS for the FAQs (4.38 ± 1.14) was significantly higher than that for the EAU guideline-based questions (3.88 ± 1.47; p=0.009).

CONCLUSION: This study demonstrated for the first time that nearly three out of four FAQs were answered accurately and satisfactorily by ChatGPT. However, the accuracy and proficiency of ChatGPT's responses decreased significantly for guideline-based questions on UE.
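As a rough illustration of the analysis described above, the sketch below recomputes the reported proportions and shows how two groups of ordinal GQS ratings could be compared. The per-question scores and the significance test are not given in this record, so the score vectors and the choice of a Mann-Whitney U test (a common option for 1-5 ordinal ratings) are assumptions, not the authors' actual data or method.

# A minimal sketch, assuming hypothetical score distributions and a
# Mann-Whitney U test; neither is stated in the abstract.
from statistics import mean, stdev
from scipy.stats import mannwhitneyu

# Proportions reported in the abstract.
print(f"FAQs rated GQS 5:      {53 / 73:.1%}")   # -> 72.6%
print(f"EAU questions rated 5: {23 / 42:.1%}")   # -> 54.8%

# Hypothetical per-question GQS vectors (NOT the study data); the
# counts are chosen only so the group means match the reported
# 4.38 and 3.88.
faq_scores = [5] * 53 + [4] * 4 + [3] * 9 + [2] * 5 + [1] * 2
eau_scores = [5] * 23 + [4] * 5 + [3] * 4 + [2] * 6 + [1] * 4

print(f"FAQ mean GQS: {mean(faq_scores):.2f} ± {stdev(faq_scores):.2f}")
print(f"EAU mean GQS: {mean(eau_scores):.2f} ± {stdev(eau_scores):.2f}")

# The abstract reports p=0.009 without naming the test; a two-sided
# Mann-Whitney U comparison is assumed here.
stat, p = mannwhitneyu(faq_scores, eau_scores, alternative="two-sided")
print(f"Mann-Whitney U={stat:.1f}, p={p:.3f}")

With the illustrative vectors above, the group means land on the reported 4.38 and 3.88, but the exact p-value depends on the true score distributions, which only the full paper provides.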
Pages: 291-295 (5 pages)