Zero-shot interpretable phenotyping of postpartum hemorrhage using large language models

Cited by: 15
Authors
Alsentzer, Emily [1 ]
Rasmussen, Matthew J. [2 ]
Fontoura, Romy [2 ]
Cull, Alexis L. [2 ]
Beaulieu-Jones, Brett [3 ]
Gray, Kathryn J. [4 ,5 ]
Bates, David W. [1 ,6 ]
Kovacheva, Vesela P. [2 ]
Affiliations
[1] Brigham & Womens Hosp, Div Gen Internal Med & Primary Care, Boston, MA USA
[2] Brigham & Womens Hosp, Dept Anesthesiol Perioperat & Pain Med, Boston, MA 02115 USA
[3] Univ Chicago, Dept Med, Sect Biomed Data Sci, Chicago, IL USA
[4] Massachusetts Gen Hosp, Ctr Genom Med, Boston, MA USA
[5] Brigham & Womens Hosp, Div Maternal Fetal Med, Boston, MA USA
[6] Harvard TH Chan Sch Publ Hlth, Dept Hlth Care Policy & Management, Boston, MA USA
Keywords
CLASSIFICATION; ALGORITHMS;
DOI
10.1038/s41746-023-00957-x
Chinese Library Classification
R19 [Health organization and services (health services administration)];
Subject Classification Code
Abstract
Many areas of medicine would benefit from deeper, more accurate phenotyping, but there are limited approaches for phenotyping using clinical notes without substantial annotated data. Large language models (LLMs) have demonstrated immense potential to adapt to novel tasks with no additional training by specifying task-specific instructions. Here we report the performance of a publicly available LLM, Flan-T5, in phenotyping patients with postpartum hemorrhage (PPH) using discharge notes from electronic health records (n = 271,081). The language model achieves strong performance in extracting 24 granular concepts associated with PPH. Identifying these granular concepts accurately allows the development of interpretable, complex phenotypes and subtypes. The Flan-T5 model achieves high fidelity in phenotyping PPH (positive predictive value of 0.95), identifying 47% more patients with this complication compared to the current standard of using claims codes. This LLM pipeline can be used reliably for subtyping PPH and outperforms a claims-based approach on the three most common PPH subtypes associated with uterine atony, abnormal placentation, and obstetric trauma. The advantage of this approach to subtyping is its interpretability, as each concept contributing to the subtype determination can be evaluated. Moreover, as definitions may change over time due to new guidelines, using granular concepts to create complex phenotypes enables prompt and efficient updating of the algorithm. Using this language modeling approach enables rapid phenotyping without the need for any manually annotated training data across multiple clinical use cases.
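The abstract describes a two-stage pipeline: zero-shot yes/no prompts extract granular clinical concepts from each discharge note, and a transparent rule then combines those concept answers into a complex phenotype whose every contributing concept can be inspected. The sketch below illustrates that shape only; it is not the authors' released code, and the concept names, prompt wording, and combination rule are all hypothetical stand-ins (the paper uses 24 concepts and its own phenotype definitions).

```python
# Illustrative sketch of the abstract's pipeline shape, not the authors' code.
# Concept names, prompt wording, and the combination rule are hypothetical.

# Stage 1 inputs: one zero-shot yes/no question per granular concept.
CONCEPT_QUESTIONS = {
    "uterine_atony": "Does the note describe uterine atony?",
    "transfusion": "Did the patient receive a blood transfusion?",
    "high_blood_loss": "Was the estimated blood loss 1000 mL or more?",
}


def build_prompt(note_text: str, question: str) -> str:
    """Format a zero-shot instruction for a text-to-text model.

    With Hugging Face transformers, such a prompt could be sent to e.g.
    pipeline("text2text-generation", model="google/flan-t5-xl")(prompt);
    the model call is omitted here so the sketch stays self-contained.
    """
    return f"Context: {note_text}\n\nQuestion: {question} Answer yes or no.\nAnswer:"


def parse_answer(generation: str) -> bool:
    """Map the model's free-text output onto a boolean concept value."""
    return generation.strip().lower().startswith("yes")


def phenotype_pph(concepts: dict) -> bool:
    """Stage 2: an interpretable rule over concept answers.

    Here PPH is flagged when any supporting concept is present; because the
    phenotype is a function of named concepts, each contribution can be
    audited, and the rule can be edited when guidelines change without
    retraining or re-annotating anything.
    """
    return any(concepts.values())


# Usage with stubbed model outputs in place of real generations:
answers = {
    name: parse_answer(generation)
    for name, generation in {
        "uterine_atony": "yes, severe atony was noted",
        "transfusion": "no",
        "high_blood_loss": "yes",
    }.items()
}
print(phenotype_pph(answers))  # True, and `answers` shows which concepts fired
```

Keeping extraction and phenotype logic separate is what makes the approach updatable: a new guideline changes only `phenotype_pph`, while the concept extractions remain valid.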
Pages: 10
Related Papers
(50 records in total)
  • [41] Diff-ZsVQA: Zero-shot Visual Question Answering with Frozen Large Language Models Using Diffusion Model
    Xu, Quanxing
    Li, Jian
    Tian, Yuhao
    Zhou, Ling
    Zhang, Feifei
    Huang, Rubing
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 275
  • [42] Large Language Model Ranker with Graph Reasoning for Zero-Shot Recommendation
    Zhang, Xuan
    Wei, Chunyu
    Yan, Ruyu
    Fan, Yushun
    Jia, Zhixuan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT V, 2024, 15020 : 356 - 370
  • [43] Zero-Shot Translation of Attention Patterns in VQA Models to Natural Language
    Salewski, Leonard
    Koepke, A. Sophia
    Lensch, Hendrik P. A.
    Akata, Zeynep
    PATTERN RECOGNITION, DAGM GCPR 2023, 2024, 14264 : 378 - 393
  • [44] Label Propagation for Zero-shot Classification with Vision-Language Models
    Stojnic, Vladan
    Kalantidis, Yannis
    Tolias, Giorgos
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 23209 - 23218
  • [45] JOINT MUSIC AND LANGUAGE ATTENTION MODELS FOR ZERO-SHOT MUSIC TAGGING
    Du, Xingjian
    Yu, Zhesong
    Lin, Jiaju
    Zhu, Bilei
    Kong, Qiuqiang
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 1126 - 1130
  • [46] Zero-Shot Cross-Lingual Reranking with Large Language Models for Low-Resource Languages
    Adeyemi, Mofetoluwa
    Oladipo, Akintunde
    Pradeep, Ronak
    Lin, Jimmy
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2: SHORT PAPERS, 2024, : 650 - 656
  • [47] Improving Zero-shot Visual Question Answering via Large Language Models with Reasoning Question Prompts
    Lan, Yunshi
    Li, Xiang
    Liu, Xin
    Li, Yang
    Qin, Wei
    Qian, Weining
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 4389 - 4400
  • [48] Semantic Parsing by Large Language Models for Intricate Updating Strategies of Zero-Shot Dialogue State Tracking
    Wu, Yuxiang
    Dong, Guanting
    Xu, Weiran
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 11093 - 11099
  • [49] Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models
    Li, Lin
    Xiao, Jun
    Chen, Guikun
    Shao, Jian
    Zhuang, Yueting
    Chen, Long
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [50] From Images to Textual Prompts: Zero-shot Visual Question Answering with Frozen Large Language Models
    Guo, Jiaxian
    Li, Junnan
    Li, Dongxu
    Tiong, Anthony Meng Huat
    Li, Boyang
    Tao, Dacheng
    Hoi, Steven
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 10867 - 10877