BI-RADS Category Assignments by GPT-3.5, GPT-4, and Google Bard: A Multilanguage Study

Cited by: 16
Authors
Cozzi, Andrea [1 ]
Pinker, Katja [2 ]
Hidber, Andri [3 ]
Zhang, Tianyu [4 ,5 ,6 ]
Bonomo, Luca [1 ]
Lo Gullo, Roberto [2 ,4 ]
Christianson, Blake [2 ]
Curti, Marco [1 ]
Rizzo, Stefania [1 ,3 ]
Del Grande, Filippo [1 ,3 ]
Mann, Ritse M. [4 ,5 ]
Schiaffino, Simone [1 ,3 ]
Institutions
[1] Ente Osped Cantonale, Imaging Inst Southern Switzerland IIMSI, ViaTesserete 46, CH-6900 Lugano, Switzerland
[2] Mem Sloan Kettering Canc Ctr, Dept Radiol, Breast Imaging Serv, New York, NY USA
[3] Univ Svizzera italiana, Fac Biomed Sci, Lugano, Switzerland
[4] Netherlands Canc Inst, Dept Radiol, Amsterdam, Netherlands
[5] Radboud Univ Nijmegen, Dept Diagnost Imaging, Med Ctr, NL-6500 HB Nijmegen, Netherlands
[6] Maastricht Univ, GROW Res Inst Oncol & Reprod, Maastricht, Netherlands
Keywords
INTEROBSERVER VARIABILITY; AGREEMENT; RELIABILITY
DOI
10.1148/radiol.232133
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes
1002; 100207; 1009
Abstract
Background: The performance of publicly available large language models (LLMs) remains unclear for complex clinical tasks.

Purpose: To evaluate the agreement between human readers and LLMs for Breast Imaging Reporting and Data System (BI-RADS) categories assigned based on breast imaging reports written in three languages and to assess the impact of discordant category assignments on clinical management.

Materials and Methods: This retrospective study included reports for women who underwent MRI, mammography, and/or US for breast cancer screening or diagnostic purposes at three referral centers. Reports with findings categorized as BI-RADS 1-5 and written in Italian, English, or Dutch were collected between January 2000 and October 2023. Board-certified breast radiologists and the LLMs GPT-3.5 and GPT-4 (OpenAI) and Bard, now called Gemini (Google), assigned BI-RADS categories using only the findings described by the original radiologists. Agreement between human readers and LLMs for BI-RADS categories was assessed using the Gwet agreement coefficient (AC1 value). Frequencies were calculated for changes in BI-RADS category assignments that would alter clinical management (ie, BI-RADS 0 vs BI-RADS 1 or 2 vs BI-RADS 3 vs BI-RADS 4 or 5) and compared using the McNemar test.

Results: Across 2400 reports, agreement between the original and reviewing radiologists was almost perfect (AC1 = 0.91), while agreement between the original radiologists and GPT-4, GPT-3.5, and Bard was moderate (AC1 = 0.52, 0.48, and 0.42, respectively). Across human readers and LLMs, differences were observed in the frequency of BI-RADS category upgrades or downgrades that would result in changed clinical management (118 of 2400 [4.9%] for human readers, 611 of 2400 [25.5%] for Bard, 573 of 2400 [23.9%] for GPT-3.5, and 435 of 2400 [18.1%] for GPT-4; P < .001) and that would negatively impact clinical management (37 of 2400 [1.5%] for human readers, 435 of 2400 [18.1%] for Bard, 344 of 2400 [14.3%] for GPT-3.5, and 255 of 2400 [10.6%] for GPT-4; P < .001).

Conclusion: LLMs achieved moderate agreement with human reader-assigned BI-RADS categories across reports written in three languages but also yielded a high percentage of discordant BI-RADS categories that would negatively impact clinical management.
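The agreement statistic used in the study, Gwet's first-order agreement coefficient (AC1), corrects observed agreement for chance agreement like Cohen's kappa but is less sensitive to skewed category prevalence. A minimal two-rater sketch of the computation follows; the function name and the toy BI-RADS ratings are illustrative assumptions, not data from the study.

```python
from collections import Counter

def gwet_ac1(rater_a, rater_b):
    """Gwet's AC1 for two raters assigning one of Q nominal
    categories to each of n items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    q = len(categories)
    # Observed agreement: fraction of items where both raters agree.
    p_a = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Average marginal proportion pi_q for each category.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    pi = {c: (count_a[c] + count_b[c]) / (2 * n) for c in categories}
    # Chance agreement under Gwet's model: mean of pi_q * (1 - pi_q).
    p_e = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (p_a - p_e) / (1 - p_e)

# Toy example: BI-RADS categories assigned by two readers to 8 reports.
original = [1, 2, 4, 3, 5, 2, 1, 4]
reviewer = [1, 2, 4, 3, 4, 2, 1, 4]
print(round(gwet_ac1(original, reviewer), 3))
```

Benchmarks such as AC1 > 0.80 for "almost perfect" and 0.41-0.60 for "moderate" agreement, as used in the abstract, follow the conventional Landis-Koch-style interpretation bands.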
Pages: 8