The Fine-Tuned Large Language Model for Extracting the Progressive Bone Metastasis from Unstructured Radiology Reports

Cited by: 2
Authors
Kanemaru, Noriko [1 ]
Yasaka, Koichiro [1 ]
Fujita, Nana [1 ]
Kanzawa, Jun [1 ]
Abe, Osamu [1 ]
Affiliations
[1] Univ Tokyo, Grad Sch Med, Dept Radiol, 7-3-1 Hongo Bunkyo-Ku, Tokyo 1138655, Japan
Source
Keywords
Large language model; Bone metastasis; Deep learning; DISEASE;
DOI
10.1007/s10278-024-01242-3
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline codes
1002; 100207; 1009
Abstract
Early detection of patients with impending bone metastasis is crucial for improving prognosis. This study aimed to investigate the feasibility of a fine-tuned, locally run large language model (LLM) in identifying patients with bone metastasis in unstructured Japanese radiology reports and to compare its performance with manual annotation. This retrospective study included patients whose radiology reports contained the word "metastasis" (April 2018-January 2019, August-May 2022, and April-December 2023 for the training, validation, and test datasets of 9559, 1498, and 7399 patients, respectively). Radiologists reviewed the clinical indication and diagnosis sections of each report (used as input data) and classified it into group 0 (no bone metastasis), group 1 (progressive bone metastasis), or group 2 (stable or decreased bone metastasis). Group 0 was under-sampled in the training and test datasets because of group imbalance. The model that performed best on the validation set was then evaluated on the test dataset. Two additional radiologists (readers 1 and 2) also classified the radiology reports in the test dataset for comparison. On the under-sampled test dataset (n = 711), the fine-tuned LLM, reader 1, and reader 2 achieved accuracies of 0.979, 0.996, and 0.993; sensitivities for groups 0/1/2 of 0.988/0.947/0.943, 1.000/1.000/0.966, and 1.000/0.982/0.954; and classification times (s) of 105, 2312, and 3094, respectively. The fine-tuned LLM extracted patients with bone metastasis with satisfactory performance, comparable to or slightly below manual annotation by radiologists, in a markedly shorter time.
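The abstract describes under-sampling the majority class (group 0, no bone metastasis) to reduce group imbalance before training and testing. A minimal sketch of such majority-class under-sampling is shown below; the paper does not state its sampling ratio or data format, so the `target_size` parameter and the record structure here are illustrative assumptions, not the authors' pipeline:

```python
import random

def undersample(records, majority_label, target_size, seed=0):
    """Randomly keep at most `target_size` records of the majority label;
    records with all other labels are kept in full."""
    rng = random.Random(seed)  # fixed seed for a reproducible subsample
    majority = [r for r in records if r["label"] == majority_label]
    minority = [r for r in records if r["label"] != majority_label]
    kept = rng.sample(majority, min(target_size, len(majority)))
    return minority + kept

# Toy dataset: group 0 dominates groups 1 and 2.
reports = ([{"text": f"report {i}", "label": 0} for i in range(100)]
           + [{"text": f"report {i}", "label": 1} for i in range(10)]
           + [{"text": f"report {i}", "label": 2} for i in range(10)])
balanced = undersample(reports, majority_label=0, target_size=10)
```

After under-sampling, each of the three groups contributes comparably many reports, which keeps a classifier from trivially predicting "no bone metastasis" for everything.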
Pages: 865-872
Page count: 8
Related Papers (50 total)
  • [21] Fine-Tuned BERT Model for Large Scale and Cognitive Classification of MOOCs
    Sebbaq, Hanane
    El Faddouli, Nour-eddine
    INTERNATIONAL REVIEW OF RESEARCH IN OPEN AND DISTRIBUTED LEARNING, 2022, 23 (02): 170 - 190
  • [22] Performance of an Open-Source Large Language Model in Extracting Information from Free-Text Radiology Reports
    Le Guellec, Bastien
    Lefevre, Alexandre
    Geay, Charlotte
    Shorten, Lucas
    Bruge, Cyril
    Hacein-Bey, Lotfi
    Amouyel, Philippe
    Pruvo, Jean-Pierre
    Kuchcinski, Gregory
    Hamroun, Aghiles
    RADIOLOGY-ARTIFICIAL INTELLIGENCE, 2024, 6 (04)
  • [23] AirBERT: A fine-tuned language representation model for airlines tweet sentiment analysis
    Yenkikar, Anuradha
    Babu, C. Narendra
    INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2023, 17 (02): 435 - 455
  • [24] Extracting text from scanned Arabic books: a large-scale benchmark dataset and a fine-tuned Faster-R-CNN model
    Elanwar, Randa
    Qin, Wenda
    Betke, Margrit
    Wijaya, Derry
    INTERNATIONAL JOURNAL ON DOCUMENT ANALYSIS AND RECOGNITION, 2021, 24 (04) : 349 - 362
  • [26] Cardiac Arrest Prediction in the Pediatric CICU: A Fine-Tuned Language Model Approach
    Lu, Jiaying
    Brown, Stephanie
    Dong, Kejun
    Bold, Del
    Fundora, Michael
    Grunwell, Jocelyn
    Hu, Xiao
    CRITICAL CARE MEDICINE, 2025, 53 (01)
  • [27] Automated Smart Contract Vulnerability Detection using Fine-tuned Large Language Models
    Yang, Zhiju
    Man, Gaoyuan
    Yue, Songqing
    6TH INTERNATIONAL CONFERENCE ON BLOCKCHAIN TECHNOLOGY AND APPLICATIONS, ICBTA 2023, 2023, : 19 - 23
  • [28] Differential Privacy to Mathematically Secure Fine-Tuned Large Language Models for Linguistic Steganography
    Coffey, Sean M.
    Catudal, Joseph W.
    Bastian, Nathaniel D.
    ASSURANCE AND SECURITY FOR AI-ENABLED SYSTEMS, 2024, 13054
  • [29] MoralBERT: A Fine-Tuned Language Model for Capturing Moral Values in Social Discussions
    Preniqi, Vjosa
    Ghinassi, Iacopo
    Ive, Julia
    Saitis, Charalampos
    Kalimeri, Kyriaki
    PROCEEDINGS OF THE 2024 INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY FOR SOCIAL GOOD, GOODIT 2024, 2024, : 433 - 442
  • [30] Generating Software Tests for Mobile Applications Using Fine-Tuned Large Language Models
    Hoffmann, Jacob
    Frister, Demian
    PROCEEDINGS OF THE 2024 IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATION OF SOFTWARE TEST, AST 2024, 2024, : 76 - 77