LLMs Accelerate Annotation for Medical Information Extraction

Cited by: 0
Authors
Goel, Akshay [1 ]
Gueta, Almog [1 ]
Gilon, Omry [1 ]
Liu, Chang [1 ]
Erell, Sofia [1 ]
Lan Huong Nguyen [1 ]
Hao, Xiaohong [1 ]
Jaber, Bolous [1 ]
Reddy, Shashir [1 ]
Kartha, Rupesh [1 ]
Steiner, Jean [1 ]
Laish, Itay [1 ]
Feder, Amir [1 ]
Affiliation
[1] Google Research, Mountain View, CA 94035 USA
Keywords
Medical NLP; Large Language Models; Data Annotation; Electronic Health Records; Text
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The unstructured nature of clinical notes within electronic health records often conceals vital patient-related information, making it challenging to access or interpret. To uncover this hidden information, specialized Natural Language Processing (NLP) models are required. However, training these models necessitates large amounts of labeled data, a process that is both time-consuming and costly when relying solely on human experts for annotation. In this paper, we propose an approach that combines Large Language Models (LLMs) with human expertise to create an efficient method for generating ground truth labels for medical text annotation. By utilizing LLMs in conjunction with human annotators, we significantly reduce the human annotation burden, enabling the rapid creation of labeled datasets. We rigorously evaluate our method on a medical information extraction task, demonstrating that our approach not only substantially cuts down on human intervention but also maintains high accuracy. The results highlight the potential of using LLMs to improve the utilization of unstructured clinical data, allowing for the swift deployment of tailored NLP solutions in healthcare.
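The abstract describes pairing LLM-generated labels with targeted human review. A minimal sketch of that general pattern (not the authors' implementation; the `llm_annotate` stub, `Proposal` type, and confidence threshold are illustrative assumptions) looks like this: the model proposes a label for each clinical snippet, and only low-confidence proposals are routed to a human annotator.

```python
# Illustrative sketch of LLM-assisted annotation triage: auto-accept
# high-confidence model labels, queue the rest for human review.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Proposal:
    text: str          # the clinical snippet being labeled
    label: str         # proposed tag, e.g. "MEDICATION"
    confidence: float  # model's (calibrated) confidence in the label


def triage(
    snippets: List[str],
    llm_annotate: Callable[[str], Proposal],
    threshold: float = 0.9,
) -> Tuple[List[Proposal], List[Proposal]]:
    """Split snippets into auto-accepted labels and a human-review queue."""
    auto, review = [], []
    for snippet in snippets:
        proposal = llm_annotate(snippet)
        (auto if proposal.confidence >= threshold else review).append(proposal)
    return auto, review


# Stand-in for a real LLM call: "annotates" by keyword match.
def fake_llm(text: str) -> Proposal:
    if "metformin" in text.lower():
        return Proposal(text, "MEDICATION", 0.97)
    return Proposal(text, "OTHER", 0.55)


auto, review = triage(
    ["Started metformin 500mg daily.", "Patient feels dizzy at times."],
    fake_llm,
)
# Only the uncertain snippet reaches a human annotator; the rest are
# accepted directly, which is where the annotation savings come from.
```

The threshold trades annotation cost against label quality: lowering it accepts more model labels automatically but risks more silent errors, so in practice it would be tuned against a small expert-labeled validation set.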
Pages: 82-100 (19 pages)