The Fine-Tuned Large Language Model for Extracting the Progressive Bone Metastasis from Unstructured Radiology Reports

Cited by: 2
Authors
Kanemaru, Noriko [1 ]
Yasaka, Koichiro [1 ]
Fujita, Nana [1 ]
Kanzawa, Jun [1 ]
Abe, Osamu [1 ]
Affiliations
[1] Univ Tokyo, Grad Sch Med, Dept Radiol, 7-3-1 Hongo Bunkyo-Ku, Tokyo 1138655, Japan
Source
Keywords
Large language model; Bone metastasis; Deep learning; DISEASE;
DOI
10.1007/s10278-024-01242-3
CLC Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject Classification
1002 ; 100207 ; 1009 ;
Abstract
Early detection of patients with impending bone metastasis is crucial for improving prognosis. This study aimed to investigate the feasibility of a fine-tuned, locally run large language model (LLM) in extracting patients with bone metastasis from unstructured Japanese radiology reports and to compare its performance with manual annotation. This retrospective study included patients with "metastasis" in radiological reports (April 2018-January 2019, August-May 2022, and April-December 2023 for training, validation, and test datasets of 9559, 1498, and 7399 patients, respectively). Radiologists reviewed the clinical indication and diagnosis sections of the radiological reports (used as input data) and classified them into group 0 (no bone metastasis), group 1 (progressive bone metastasis), and group 2 (stable or decreased bone metastasis). The data for group 0 were under-sampled in the training and test datasets due to group imbalance. The best-performing model on the validation set was subsequently evaluated on the test dataset. Two additional radiologists (readers 1 and 2) classified the radiological reports in the test dataset for comparison. On the under-sampled test dataset (n = 711), the fine-tuned LLM, reader 1, and reader 2 demonstrated accuracies of 0.979, 0.996, and 0.993; sensitivities for groups 0/1/2 of 0.988/0.947/0.943, 1.000/1.000/0.966, and 1.000/0.982/0.954; and classification times (s) of 105, 2312, and 3094, respectively. The fine-tuned LLM extracted patients with bone metastasis with satisfactory performance, comparable to or slightly lower than manual annotation by radiologists, in a noticeably shorter time.
Pages: 865-872 (8 pages)
Related Papers
50 items total
  • [41] Enhancing Solution Diversity in Arithmetic Problems using Fine-Tuned AI Language Model
    Lee, Chang-Yu
    Lai, I-Wei
    2024 11TH INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS-TAIWAN, ICCE-TAIWAN 2024, 2024, : 515 - 516
  • [42] Comparing Fine-Tuned Transformers and Large Language Models for Sales Call Classification: A Case Study
    Eisenstadt, Roy
    Asi, Abedelkader
    Ronen, Royi
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 5240 - 5241
  • [43] RankMean: Module-Level Importance Score for Merging Fine-tuned Large Language Models
    Perin, Gabriel J.
    Chen, Xuxi
    Liu, Shusen
    Kailkhura, Bhavya
    Wang, Zhangyang
    Gallagher, Brian
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 1776 - 1782
  • [44] Constructing a Large Language Model to Generate Impressions from Findings in Radiology Reports
    Zhang, Lu
    Liu, Mingqian
    Wang, Lingyun
    Zhang, Yaping
    Xu, Xiangjun
    Pan, Zhijun
    Feng, Yan
    Zhao, Jue
    Zhang, Lin
    Yao, Gehong
    Chen, Xu
    Xie, Xueqian
    RADIOLOGY, 2024, 312 (03)
  • [45] ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge
    Li, Yunxiang
    Li, Zihan
    Zhang, Kai
    Dan, Ruilong
    Jiang, Steve
    Zhang, You
    CUREUS JOURNAL OF MEDICAL SCIENCE, 2023, 15 (06)
  • [46] A Large Language Model to Detect Negated Expressions in Radiology Reports
    Su, Yvonne
    Babore, Yonatan B.
    Kahn Jr, Charles E.
    JOURNAL OF IMAGING INFORMATICS IN MEDICINE, 2024,
  • [47] An open-source fine-tuned large language model for radiological impression generation: a multi-reader performance study
    Serapio, Adrian
    Chaudhari, Gunvant
    Savage, Cody
    Lee, Yoo Jin
    Vella, Maya
    Sridhar, Shravan
    Schroeder, Jamie Lee
    Liu, Jonathan
    Yala, Adam
    Sohn, Jae Ho
    BMC MEDICAL IMAGING, 2024, 24 (01):
  • [48] Assessment of fine-tuned large language models for real-world chemistry and material science applications
    Van Herck, Joren
    Gil, Maria Victoria
    Jablonka, Kevin Maik
    Abrudan, Alex
    Anker, Andy S.
    Asgari, Mehrdad
    Blaiszik, Ben
    Buffo, Antonio
    Choudhury, Leander
    Corminboeuf, Clemence
    Daglar, Hilal
    Elahi, Amir Mohammad
    Foster, Ian T.
    Garcia, Susana
    Garvin, Matthew
    Godin, Guillaume
    Good, Lydia L.
    Gu, Jianan
    Xiao Hu, Noemie
    Jin, Xin
    Junkers, Tanja
    Keskin, Seda
    Knowles, Tuomas P. J.
    Laplaza, Ruben
    Lessona, Michele
    Majumdar, Sauradeep
    Mashhadimoslem, Hossein
    Mcintosh, Ruaraidh D.
    Moosavi, Seyed Mohamad
    Mourino, Beatriz
    Nerli, Francesca
    Pevida, Covadonga
    Poudineh, Neda
    Rajabi-Kochi, Mahyar
    Saar, Kadi L.
    Hooriabad Saboor, Fahimeh
    Sagharichiha, Morteza
    Schmidt, K. J.
    Shi, Jiale
    Simone, Elena
    Svatunek, Dennis
    Taddei, Marco
    Tetko, Igor
    Tolnai, Domonkos
    Vahdatifar, Sahar
    Whitmer, Jonathan
    Wieland, D. C. Florian
    Willumeit-Roemer, Regine
    Zuttel, Andreas
    Smit, Berend
    CHEMICAL SCIENCE, 2025, 16 (02) : 670 - 684
  • [49] Enhancing Zero-Shot Crypto Sentiment With Fine-Tuned Language Model and Prompt Engineering
    Wahidur, Rahman S. M.
    Tashdeed, Ishmam
    Kaur, Manjit
    Lee, Heung-No
    IEEE ACCESS, 2024, 12 : 10146 - 10159
  • [50] Development of Fine-Tuned Retrieval Augmented Language Model specialized to manual books on machine tools
    Cho, Seongwoo
    Park, Jongsu
    Urn, Jumyung
    IFAC PAPERSONLINE, 2024, 58 (19): : 187 - 192