Leveraging pretrained language models for seizure frequency extraction from epilepsy evaluation reports

Cited by: 0
Authors
Rashmie Abeysinghe [1 ]
Shiqiang Tao [2 ]
Samden D. Lhatoo [1 ]
Guo-Qiang Zhang [2 ]
Licong Cui [1 ]
Affiliations
[1] Department of Neurology, McGovern Medical School, The University of Texas Health Science Center at Houston
[2] Texas Institute for Restorative Neurotechnologies, The University of Texas Health Science Center at Houston
[3] McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston
DOI: 10.1038/s41746-025-01592-4
Abstract
Seizure frequency is essential for evaluating epilepsy treatment, ensuring patient safety, and reducing the risk of Sudden Unexpected Death in Epilepsy (SUDEP). As this information is often described in clinical narratives, this study presents an approach to extracting structured seizure frequency details from such unstructured text. We investigated two tasks: (1) extracting phrases describing seizure frequency, and (2) extracting seizure frequency attributes. For both tasks, we fine-tuned three BERT-based models (bert-large-cased, biobert-large-cased, and Bio_ClinicalBERT) as well as three generative large language models (GPT-4, GPT-3.5 Turbo, and Llama-2-70b-hf). The final structured output integrated the results from both tasks. GPT-4 attained the best performance across all tasks, with precision, recall, and F1-score of 86.61%, 85.04%, and 85.79%, respectively, for frequency phrase extraction; 90.23%, 93.51%, and 91.84% for seizure frequency attribute extraction; and 86.64%, 85.06%, and 85.82% for the final structured output. These findings highlight the potential of fine-tuned generative models for extraction tasks over limited text strings.
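The paper's implementation details are not reproduced in this record, but as a rough illustration, below is a minimal sketch of how Task 1 (frequency-phrase extraction) could be framed as BIO token classification with Bio_ClinicalBERT, one of the three encoder models named above. The example sentence, label set, and hyperparameters are illustrative assumptions, not the authors' configuration.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# BIO tags marking seizure-frequency phrases (assumed label scheme).
labels = ["O", "B-FREQ", "I-FREQ"]
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModelForTokenClassification.from_pretrained(
    "emilyalsentzer/Bio_ClinicalBERT", num_labels=len(labels)
)

# One hypothetical annotated sentence; a real corpus would hold many reports.
words = ["Patient", "reports", "two", "seizures", "per", "month", "."]
tags = ["O", "O", "B-FREQ", "I-FREQ", "I-FREQ", "I-FREQ", "O"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Propagate each word's tag to all of its subword tokens; special tokens
# get -100 so the loss ignores them.
aligned = [
    -100 if wid is None else labels.index(tags[wid])
    for wid in enc.word_ids(batch_index=0)
]
enc["labels"] = torch.tensor([aligned])

# A single fine-tuning step; in practice this loop runs over the full dataset.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**enc).loss
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")

Task 2 (attribute extraction) could reuse the same pattern with an attribute-specific label set, with the two tasks' outputs merged afterwards into the final structured record.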
Related papers
50 items in total
  • [41] TM Scanlon at SemEval-2023 Task 4: Leveraging Pretrained Language Models for Human Value Argument Mining with Contrastive Learning
    Oskuee, Milad Molazadeh
    Rahgouy, Mostafa
    Giglou, Hamed Babaei
    Seals, Cheryl D.
    17TH INTERNATIONAL WORKSHOP ON SEMANTIC EVALUATION, SEMEVAL-2023, 2023, : 603 - 608
  • [42] Seizure Suppression by High Frequency Optogenetic Stimulation Using In Vitro and In Vivo Animal Models of Epilepsy
    Chiang, Chia-Chu
    Ladas, Thomas P.
    Gonzalez-Reyes, Luis E.
    Durand, Dominique M.
    BRAIN STIMULATION, 2014, 7 (06) : 890 - 899
  • [43] Exploring the Impact of Pretrained Models and Web-Scraped Data for the 2022 NIST Language Recognition Evaluation
    Alumae, Tanel
    Kukk, Kunnar
    Le, Viet-Bac
    Barras, Claude
    Messaoudi, Abdel
    Ben Kheder, Waad
    INTERSPEECH 2023, 2023, : 516 - 520
  • [44] Evaluation of the pentylenetetrazole seizure threshold test and the maximal electroshock seizure threshold test in epileptic mice as models for pharmacoresistant epilepsy
    Twele, Friederike
    Toellner, Kathrin
    Brandt, Claudia
    Loescher, Wolfgang
    NAUNYN-SCHMIEDEBERGS ARCHIVES OF PHARMACOLOGY, 2014, 387 : S98 - S98
  • [45] LLM4Jobs: Unsupervised occupation extraction and standardization leveraging Large Language Models
    Li, Nan
    Kang, Bo
    De Bie, Tijl
    KNOWLEDGE-BASED SYSTEMS, 2025, 316
  • [46] Leveraging encoder-only large language models for mobile app review feature extraction
    Motger, Quim
    Miaschi, Alessio
    Dell’Orletta, Felice
    Franch, Xavier
    Marco, Jordi
    EMPIRICAL SOFTWARE ENGINEERING, 2025, 30 (3)
  • [47] Leveraging Medical Knowledge Graphs and Large Language Models for Enhanced Mental Disorder Information Extraction
    Park, Chaelim
    Lee, Hayoung
    Jeong, Ok-ran
    FUTURE INTERNET, 2024, 16 (08)
  • [48] Leveraging Fuzzy Fingerprints from Large Language Models for Authorship Attribution
    Ribeiro, Rui
    Carvalho, Joao P.
    Coheur, Luisa
    2024 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, FUZZ-IEEE 2024, 2024,
  • [49] Transmembrane protein modulates seizure in epilepsy: evidence from temporal lobe epilepsy patients and mouse models
    Zhang, Haiqing
    Zhou, Zunlin
    Qin, Jiyao
    Yang, Juan
    Huang, Hao
    Yang, Xiaoyan
    Luo, Zhong
    Zheng, Yongsu
    Peng, Yan
    Chen, Ya
    Xu, Zucai
    EXPERIMENTAL ANIMALS, 2024, 73 (02) : 162 - 174
  • [50] Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View
    Cao, Boxi
    Lin, Hongyu
    Han, Xianpei
    Liu, Fangchao
    Sun, Le
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 5796 - 5808