Comparing Commercial and Open-Source Large Language Models for Labeling Chest Radiograph Reports

Cited by: 0
Authors
Dorfner, Felix J. [1,2,3,4,5]
Juergensen, Liv [3,5,6]
Donle, Leonhard [3]
Al Mohamad, Fares [3,5]
Bodenmann, Tobias R. [1,2,4]
Cleveland, Mason C. [1,2,4]
Busch, Felix [3,5]
Adams, Lisa C. [7]
Sato, James [8]
Schultz, Thomas [8]
Kim, Albert E. [1,2,4]
Merkow, Jameson [9]
Bressem, Keno K. [10,11,12]
Bridge, Christopher P. [1,2,4,5,8]
Affiliations
[1] Massachusetts Gen Hosp, Athinoula A Martinos Ctr Biomed Imaging, 149 Thirteenth St, Charlestown, MA 02129 USA
[2] Harvard Med Sch, 149 Thirteenth St, Charlestown, MA 02129 USA
[3] Charite Univ Med Berlin, Dept Radiol, Berlin, Germany
[4] Free Univ Berlin, Berlin, Germany
[5] Humboldt Univ, Berlin, Germany
[6] Dana Farber Canc Inst, Dept Pediat Oncol, Boston, MA USA
[7] Tech Univ Munich, Dept Diagnost & Intervent Radiol, Munich, Germany
[8] Mass Gen Brigham Data Sci Off, Boston, MA USA
[9] Microsoft Hlth & Life Sci HLS, Redmond, WA 98052 USA
[10] Tech Univ Munich, Klinikum Rechts Isar, Munich, Germany
[11] German Heart Ctr Munich, Dept Radiol & Nucl Med, Munich, Germany
[12] Tech Univ Munich, TUM Univ Hosp, Sch Med & Hlth, Dept Cardiovasc Radiol & Nucl Med,German Heart Ctr, Munich, Germany
DOI
10.1148/radiol.241139
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Numbers
1002; 100207; 1009
Abstract
Background: Rapid advances in large language models (LLMs) have led to the development of numerous commercial and open-source models. While recent publications have explored the use of OpenAI's GPT-4 to extract information of interest from radiology reports, there has been no real-world comparison of GPT-4 with leading open-source models.

Purpose: To compare leading open-source LLMs with GPT-4 on the task of extracting relevant findings from chest radiograph reports.

Materials and Methods: Two independent datasets of free-text radiology reports from chest radiograph examinations were used in this retrospective study, performed between February 2, 2024, and February 14, 2024. The first dataset consisted of reports from the ImaGenome dataset, which provides reference standard annotations for reports from the MIMIC-CXR database acquired between 2011 and 2016. The second dataset consisted of randomly selected reports created at Massachusetts General Hospital between July 2019 and July 2021. In both datasets, the commercial models GPT-3.5 Turbo and GPT-4 were compared with the open-source models Mistral-7B and Mixtral-8x7B (Mistral AI), Llama 2-13B and Llama 2-70B (Meta), and Qwen1.5-72B (Alibaba Group), as well as with CheXbert and CheXpert-labeler (Stanford ML Group), in their ability to accurately label the presence of multiple findings in radiograph text reports using zero-shot and few-shot prompting. The McNemar test was used to compare F1 scores between models.

Results: On the ImaGenome dataset (n = 450), the highest-scoring open-source model, Llama 2-70B, achieved micro F1 scores of 0.97 and 0.97 for zero-shot and few-shot prompting, respectively, compared with GPT-4 F1 scores of 0.98 and 0.98 (P > .99 and P < .001, respectively, for superiority of GPT-4). On the institutional dataset (n = 500), the highest-scoring open-source model, an ensemble model, achieved micro F1 scores of 0.96 and 0.97 for zero-shot and few-shot prompting, respectively, compared with GPT-4 F1 scores of 0.98 and 0.97 (P < .001 and P > .99, respectively, for superiority of GPT-4).

Conclusion: Although GPT-4 was superior to the open-source models in zero-shot report labeling, few-shot prompting with a small number of example reports closely matched the performance of GPT-4. The benefit of few-shot prompting varied across datasets and models.
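
Illustrative sketch (not the authors' code): the following Python fragment shows, under stated assumptions, how the workflow described in the abstract could look in practice, building a zero-shot or few-shot prompt for per-finding labeling of a report, parsing binary labels from the model's answer, pooling them into a micro F1 score, and comparing two models with an exact McNemar test. The query_model callable, the finding list, and the prompt wording are hypothetical placeholders, not those used in the study.

# Minimal sketch of LLM-based report labeling and paired model comparison.
# `query_model` stands in for whichever commercial API or local inference
# backend is used; findings and prompt text are illustrative assumptions.
from typing import Callable, Sequence
from statsmodels.stats.contingency_tables import mcnemar

FINDINGS = ["pleural effusion", "pneumothorax", "consolidation", "cardiomegaly"]

def build_prompt(report: str, examples: Sequence[tuple] = ()) -> str:
    """Zero-shot prompt if `examples` is empty; few-shot otherwise."""
    lines = [
        "Label each finding as 1 (present) or 0 (absent) in the report.",
        f"Findings: {', '.join(FINDINGS)}",
        "Answer with one 'finding: 0/1' pair per line.",
    ]
    for ex_report, ex_labels in examples:  # few-shot demonstrations
        lines.append(f"Report: {ex_report}")
        lines += [f"{f}: {ex_labels[f]}" for f in FINDINGS]
    lines.append(f"Report: {report}")
    return "\n".join(lines)

def parse_labels(answer: str) -> list:
    """Extract one binary label per finding from the model's free-text answer."""
    labels = []
    for finding in FINDINGS:
        line = next((l for l in answer.splitlines() if l.lower().startswith(finding)), "")
        labels.append(1 if line.strip().endswith("1") else 0)
    return labels

def label_reports(reports: Sequence[str], query_model: Callable[[str], str], examples=()) -> list:
    """Run the (hypothetical) `query_model` callable over all reports."""
    return [parse_labels(query_model(build_prompt(r, examples))) for r in reports]

def micro_f1(pred: list, ref: list) -> float:
    """Micro F1: pool TP/FP/FN over all findings and reports before computing F1."""
    tp = sum(p == r == 1 for ps, rs in zip(pred, ref) for p, r in zip(ps, rs))
    fp = sum(p == 1 and r == 0 for ps, rs in zip(pred, ref) for p, r in zip(ps, rs))
    fn = sum(p == 0 and r == 1 for ps, rs in zip(pred, ref) for p, r in zip(ps, rs))
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

def compare_models(pred_a: list, pred_b: list, ref: list) -> float:
    """Exact McNemar test on the paired per-label correctness of two models."""
    flat = lambda rows: [x for row in rows for x in row]
    a_ok = [p == r for p, r in zip(flat(pred_a), flat(ref))]
    b_ok = [p == r for p, r in zip(flat(pred_b), flat(ref))]
    table = [[sum(a and b for a, b in zip(a_ok, b_ok)),
              sum(a and not b for a, b in zip(a_ok, b_ok))],   # A right, B wrong
             [sum(b and not a for a, b in zip(a_ok, b_ok)),    # B right, A wrong
              sum(not a and not b for a, b in zip(a_ok, b_ok))]]
    return mcnemar(table, exact=True).pvalue

The exact McNemar test used in the sketch considers only the discordant cells of the paired contingency table (cases one model labels correctly and the other does not), which matches its use for comparing two classifiers evaluated on the same reports.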
Pages: 8