ChatGPT's competence in responding to urological emergencies

Cited: 0
Authors
Ortac, Mazhar [1 ]
Ergul, Rifat Burak [1 ]
Yazili, Huseyin Burak [2 ]
Ozervarli, Muhammet Firat [1 ]
Tonyali, Senol [1 ]
Sarilar, Omer [2 ]
Ozgor, Faruk [2 ]
Affiliations
[1] Istanbul Univ, Istanbul Fac Med, Dept Urol, Istanbul, Turkiye
[2] Haseki Training & Res Hosp, Dept Urol, Istanbul, Turkiye
Keywords
Artificial intelligence; ChatGPT; urological emergencies
DOI
10.14744/tjtes.2024.03377
CLC Number
R4 [Clinical Medicine]
Discipline Codes
1002; 100602
Abstract
BACKGROUND: In recent years, artificial intelligence (AI) applications have been increasingly used as sources of medical information, alongside their applications in many other fields. This study is the first to evaluate ChatGPT's performance in addressing urological emergencies (UE).

METHODS: The study included frequently asked questions (FAQs) by the public regarding UE, as well as UE-related questions formulated based on the European Association of Urology (EAU) guidelines. The FAQs were selected from questions posed by patients to doctors and hospital accounts on social media platforms (Facebook, Instagram, and X) and on websites. All questions were presented to ChatGPT 4 (premium version) in English, and the responses were recorded. Two urologists assessed the quality of the responses using a Global Quality Score (GQS) on a scale of 1 to 5.

RESULTS: Of the 73 total FAQs, 53 (72.6%) received a GQS score of 5, while only two (2.7%) received a GQS score of 1. The questions with a GQS score of 1 pertained to priapism and urosepsis. The topic with the highest proportion of responses receiving a GQS score of 5 was urosepsis (82.3%), whereas the lowest scores were observed in questions related to renal trauma (66.7%) and postrenal acute kidney injury (66.7%). A total of 42 questions were formulated based on the EAU guidelines, of which 23 (54.8%) received a GQS score of 5 from the physicians. The mean GQS score for FAQs was 4.38 ± 1.14, which was significantly higher (p=0.009) than the mean GQS score for EAU guideline-based questions (3.88 ± 1.47).

CONCLUSION: This study demonstrated for the first time that nearly three out of four FAQs were answered accurately and satisfactorily by ChatGPT. However, the accuracy and proficiency of ChatGPT's responses significantly decreased when addressing guideline-based questions on UE.
Pages: 291 - 295
Page count: 5
Related Articles (50 total)
  • [21] COMMENTARY: Responding to adaptation emergencies
    Hall, Jim W.
    Berkhout, Frans
    Douglas, Rowan
    NATURE CLIMATE CHANGE, 2015, 5 (01) : 6 - +
  • [22] Responding to oil burner emergencies
    Montagna, Frank C.
    Fire Engineering, 2000, 153 (08)
  • [23] Focus: urological emergencies during pregnancy
    Hermieu, J. -F.
    PELVI-PERINEOLOGIE, 2007, 2 (03) : 251 - 261
  • [24] Urological Emergencies and Diseases in Wilderness Expeditions
    Cook, Kyle A.
    Bledsoe, Gregory H.
    Canon, Stephen J.
    WILDERNESS & ENVIRONMENTAL MEDICINE, 2021, 32 (03) : 355 - 364
  • [25] Acute urological emergencies past and present
    Beer, E
    ANNALS OF SURGERY, 1933, 98 : 780 - 784
  • [26] ChatGPT and most frequent urological diseases: comment
    Kleebayoon, Amnuay
    Wiwanitkit, Viroj
    WORLD JOURNAL OF UROLOGY, 2023, 41 (11) : 3387 - 3387
  • [28] Competence in pulmonary endoscopy emergencies
    Simonassi, Claudio F.
    Majori, Maria
    Covesnon, Maria G.
    Brianti, Annalisa
    Lazzari Agli, Luigi
    Meoni, Eleonora
    Ielpo, Antonella
    Corbetta, Lorenzo
    PANMINERVA MEDICA, 2019, 61 (03) : 386 - 400
  • [29] The psychological impact of responding to agricultural emergencies
    Jenner, Meredith
    AUSTRALIAN JOURNAL OF EMERGENCY MANAGEMENT, 2007, 22 (02) : 25 - 31
  • [30] You asked for urological emergencies! Please hold on
    Long, J-A
    Boissier, R.
    Savoie, P-H
    PROGRES EN UROLOGIE, 2021, 31 (15) : 943 - 944