BI-RADS Category Assignments by GPT-3.5, GPT-4, and Google Bard: A Multilanguage Study

Cited by: 16
Authors
Cozzi, Andrea [1 ]
Pinker, Katja [2 ]
Hidber, Andri [3 ]
Zhang, Tianyu [4 ,5 ,6 ]
Bonomo, Luca [1 ]
Lo Gullo, Roberto [2 ,4 ]
Christianson, Blake [2 ]
Curti, Marco [1 ]
Rizzo, Stefania [1 ,3 ]
Del Grande, Filippo [1 ,3 ]
Mann, Ritse M. [4 ,5 ]
Schiaffino, Simone [1 ,3 ]
Affiliations
[1] Ente Osped Cantonale, Imaging Inst Southern Switzerland IIMSI, Via Tesserete 46, CH-6900 Lugano, Switzerland
[2] Mem Sloan Kettering Canc Ctr, Dept Radiol, Breast Imaging Serv, New York, NY USA
[3] Univ Svizzera italiana, Fac Biomed Sci, Lugano, Switzerland
[4] Netherlands Canc Inst, Dept Radiol, Amsterdam, Netherlands
[5] Radboud Univ Nijmegen, Dept Diagnost Imaging, Med Ctr, NL-6500 HB Nijmegen, Netherlands
[6] Maastricht Univ, GROW Res Inst Oncol & Reprod, Maastricht, Netherlands
Keywords
INTEROBSERVER VARIABILITY; AGREEMENT; RELIABILITY
DOI
10.1148/radiol.232133
CLC Number
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Numbers
1002; 100207; 1009
Abstract
Background: The performance of publicly available large language models (LLMs) remains unclear for complex clinical tasks.
Purpose: To evaluate the agreement between human readers and LLMs for Breast Imaging Reporting and Data System (BI-RADS) categories assigned based on breast imaging reports written in three languages and to assess the impact of discordant category assignments on clinical management.
Materials and Methods: This retrospective study included reports for women who underwent MRI, mammography, and/or US for breast cancer screening or diagnostic purposes at three referral centers. Reports with findings categorized as BI-RADS 1-5 and written in Italian, English, or Dutch were collected between January 2000 and October 2023. Board-certified breast radiologists and the LLMs GPT-3.5 and GPT-4 (OpenAI) and Bard, now called Gemini (Google), assigned BI-RADS categories using only the findings described by the original radiologists. Agreement between human readers and LLMs for BI-RADS categories was assessed using the Gwet agreement coefficient (AC1 value). Frequencies were calculated for changes in BI-RADS category assignments that would affect clinical management (ie, BI-RADS 0 vs BI-RADS 1 or 2 vs BI-RADS 3 vs BI-RADS 4 or 5) and compared using the McNemar test.
Results: Across 2400 reports, agreement between the original and reviewing radiologists was almost perfect (AC1 = 0.91), while agreement between the original radiologists and GPT-4, GPT-3.5, and Bard was moderate (AC1 = 0.52, 0.48, and 0.42, respectively). Across human readers and LLMs, differences were observed in the frequency of BI-RADS category upgrades or downgrades that would result in changed clinical management (118 of 2400 [4.9%] for human readers, 611 of 2400 [25.5%] for Bard, 573 of 2400 [23.9%] for GPT-3.5, and 435 of 2400 [18.1%] for GPT-4; P < .001) and that would negatively impact clinical management (37 of 2400 [1.5%] for human readers, 435 of 2400 [18.1%] for Bard, 344 of 2400 [14.3%] for GPT-3.5, and 255 of 2400 [10.6%] for GPT-4; P < .001).
Conclusion: LLMs achieved moderate agreement with human reader-assigned BI-RADS categories across reports written in three languages but also yielded a high percentage of discordant BI-RADS category assignments that would negatively impact clinical management.
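Note on the statistics named in Materials and Methods: the Gwet AC1 coefficient corrects observed agreement for chance agreement, and the McNemar test compares paired frequencies. The following minimal Python sketch is an illustration only, not the study's code; all ratings, counts, and variable names are hypothetical, and it uses the standard two-rater AC1 formula and the uncorrected chi-square form of the McNemar test.

from collections import Counter
from scipy.stats import chi2

def gwet_ac1(rater_a, rater_b, categories):
    """Gwet's AC1 for two raters scoring the same items (standard two-rater formula)."""
    n = len(rater_a)
    # Observed agreement: proportion of items given the same category by both raters.
    p_a = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Average marginal proportion pi_k of each category across the two raters.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    pi = {k: (counts_a[k] + counts_b[k]) / (2 * n) for k in categories}
    # Chance agreement in Gwet's model: p_e = sum_k pi_k * (1 - pi_k) / (K - 1).
    p_e = sum(p * (1 - p) for p in pi.values()) / (len(categories) - 1)
    return (p_a - p_e) / (1 - p_e)

def mcnemar_chi2(b, c):
    """McNemar test (chi-square form, no continuity correction) on discordant pair counts b and c."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)  # test statistic and P value, 1 degree of freedom

# Hypothetical example: BI-RADS categories from an original report vs an LLM reading.
original = [1, 2, 4, 3, 5, 2, 1, 4]
llm_read = [1, 2, 3, 3, 5, 3, 1, 4]
print(gwet_ac1(original, llm_read, categories=[0, 1, 2, 3, 4, 5]))
print(mcnemar_chi2(b=12, c=40))  # hypothetical discordant management-change counts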
Pages: 8