ChatGPT's competence in responding to urological emergencies

Cited by: 0
Authors
Ortac, Mazhar [1]
Ergul, Rifat Burak [1]
Yazili, Huseyin Burak [2]
Ozervarli, Muhammet Firat [1]
Tonyali, Senol [1]
Sarilar, Omer [2]
Ozgor, Faruk [2]
Affiliations
[1] Istanbul Univ, Istanbul Fac Med, Dept Urol, Istanbul, Turkiye
[2] Haseki Training & Res Hosp, Dept Urol, Istanbul, Turkiye
Keywords
Artificial intelligence; ChatGPT; urological emergencies
DOI
10.14744/tjtes.2024.03377
Chinese Library Classification
R4 [Clinical Medicine]
Subject Classification
1002; 100602
Abstract
BACKGROUND: In recent years, artificial intelligence (AI) applications have been increasingly used as sources of medical information, alongside their applications in many other fields. This study is the first to evaluate ChatGPT's performance in addressing urological emergencies (UE).

METHODS: The study included frequently asked questions (FAQs) posed by the public regarding UE, as well as UE-related questions formulated from the European Association of Urology (EAU) guidelines. The FAQs were selected from questions posed by patients to doctors and hospital accounts on social media platforms (Facebook, Instagram, and X) and on websites. All questions were presented to ChatGPT 4 (premium version) in English, and the responses were recorded. Two urologists assessed the quality of the responses using the Global Quality Score (GQS) on a scale of 1 to 5.

RESULTS: Of the 73 total FAQs, 53 (72.6%) received a GQS score of 5, while only two (2.7%) received a GQS score of 1. The questions with a GQS score of 1 pertained to priapism and urosepsis. The topic with the highest proportion of responses receiving a GQS score of 5 was urosepsis (82.3%), whereas the lowest proportions were observed for questions on renal trauma (66.7%) and postrenal acute kidney injury (66.7%). A total of 42 questions were formulated based on the EAU guidelines, of which 23 (54.8%) received a GQS score of 5 from the physicians. The mean GQS score for FAQs was 4.38 ± 1.14, significantly higher (p=0.009) than the mean GQS score for EAU guideline-based questions (3.88 ± 1.47).

CONCLUSION: This study demonstrated for the first time that nearly three out of four FAQs were answered accurately and satisfactorily by ChatGPT. However, the accuracy and proficiency of ChatGPT's responses decreased significantly when addressing guideline-based questions on UE.
Pages: 291-295
Page count: 5
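
The abstract reports a group comparison of ordinal GQS ratings (FAQ mean 4.38 ± 1.14 vs. EAU guideline-based 3.88 ± 1.47, p=0.009) but does not name the statistical test used. The Python sketch below shows one way such a comparison could be run; the Mann-Whitney U test is an assumption (a common choice for two independent samples of 1-5 ordinal scores), and the score vectors are illustrative placeholders that match only the counts stated in the abstract (53 of 73 FAQs and 23 of 42 EAU questions scored 5; two FAQs scored 1), not the study's actual data.

    import numpy as np
    from scipy.stats import mannwhitneyu

    # Illustrative GQS ratings (1-5). Only the counts of 5s (and the two
    # FAQ scores of 1) come from the abstract; the remaining scores are
    # invented placeholders, not the study's data.
    faq_scores = np.array([5] * 53 + [4] * 12 + [3] * 4 + [2] * 2 + [1] * 2)  # 73 FAQs
    eau_scores = np.array([5] * 23 + [4] * 8 + [3] * 5 + [2] * 3 + [1] * 3)   # 42 EAU questions

    print(f"FAQ mean GQS: {faq_scores.mean():.2f} ± {faq_scores.std(ddof=1):.2f}")
    print(f"EAU mean GQS: {eau_scores.mean():.2f} ± {eau_scores.std(ddof=1):.2f}")

    # Assumed test: Mann-Whitney U, suited to independent ordinal samples.
    u_stat, p_value = mannwhitneyu(faq_scores, eau_scores, alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

Run against the study's real per-question scores, the same procedure would reproduce the reported means and p-value; here it only illustrates the shape of the analysis.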