Question Answering based Clinical Text Structuring Using Pre-trained Language Model

Cited by: 0
Authors
Qiu, Jiahui [1 ]
Zhou, Yangming [1 ]
Ma, Zhiyuan [1 ]
Ruan, Tong [1 ]
Liu, Jinlin [1 ]
Sun, Jing [2 ]
Affiliations
[1] East China Univ Sci & Technol, Sch Informat Sci & Engn, Shanghai 200237, Peoples R China
[2] Shanghai Jiao Tong Univ, Ruijin Hosp, Sch Med, Shanghai 200025, Peoples R China
Source
2019 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2019
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Question answering; Clinical text structuring; Pre-trained language model; Electronic health records;
DOI
10.1109/bibm47256.2019.8983142
Chinese Library Classification (CLC)
Q5 [Biochemistry];
Discipline codes
071010; 081704;
Abstract
Clinical text structuring (CTS) is a critical and fundamental task for clinical research. Traditional approaches, such as task-specific end-to-end models and pipeline models, typically suffer from a shortage of annotated datasets and from error propagation. In this paper, we present a question answering based clinical text structuring (QA-CTS) task that unifies different specific CTS tasks and makes datasets shareable. We also propose a novel model for the QA-CTS task that introduces domain-specific features (e.g., clinical named entity information) into a pre-trained language model. Experimental results on Chinese pathology reports collected from Ruijin Hospital demonstrate that the presented QA-CTS task is effective in improving performance on specific tasks, and that our proposed model compares favorably with strong baseline models on those tasks.
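The abstract's core idea, phrasing each CTS target as a question and injecting clinical named-entity features into a pre-trained language model that predicts an answer span, can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch rendition, not the authors' implementation: the fusion-by-concatenation step, the 768-dimensional hidden size (BERT-base), and all class and parameter names are assumptions for illustration.

    # Hypothetical sketch of a QA-CTS-style span model (not the paper's code):
    # contextual token states from a pre-trained LM run over
    # "[CLS] question [SEP] clinical text [SEP]" are fused with clinical
    # NER tag embeddings, then projected to start/end logits for the
    # answer span that holds the structured value.
    import torch
    import torch.nn as nn

    class QACTSSpanHead(nn.Module):
        def __init__(self, hidden_size=768, num_ner_tags=10, ner_dim=32):
            super().__init__()
            # Token-level clinical NER tags as a domain-specific feature.
            self.ner_embed = nn.Embedding(num_ner_tags, ner_dim)
            # Fuse LM hidden states with NER embeddings (assumed mechanism).
            self.fuse = nn.Linear(hidden_size + ner_dim, hidden_size)
            # One logit each for span start and span end per token.
            self.span_out = nn.Linear(hidden_size, 2)

        def forward(self, lm_hidden, ner_tags):
            # lm_hidden: (batch, seq_len, hidden_size) from the pre-trained LM
            # ner_tags:  (batch, seq_len) integer NER tag ids per token
            ner_feats = self.ner_embed(ner_tags)
            fused = torch.tanh(self.fuse(torch.cat([lm_hidden, ner_feats], dim=-1)))
            start_logits, end_logits = self.span_out(fused).unbind(dim=-1)
            return start_logits, end_logits

    # Toy usage with random tensors standing in for real LM output:
    head = QACTSSpanHead()
    hidden = torch.randn(2, 128, 768)      # batch of 2, 128 tokens
    tags = torch.randint(0, 10, (2, 128))  # dummy NER tag ids
    start, end = head(hidden, tags)        # each of shape (2, 128)

Training such a head would minimize cross-entropy over gold start and end positions, as in standard extractive QA; because the question encodes which clinical field is wanted, one model and one dataset can serve every CTS subtask.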
Pages: 1596-1600
Page count: 5
Related papers
50 records in total
  • [1] Pre-trained Language Model for Biomedical Question Answering
    Yoon, Wonjin
    Lee, Jinhyuk
    Kim, Donghyeon
    Jeong, Minbyul
    Kang, Jaewoo
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 1168 : 727 - 740
  • [2] A Pre-trained Language Model for Medical Question Answering Based on Domain Adaption
    Liu, Lang
    Ren, Junxiang
    Wu, Yuejiao
    Song, Ruilin
    Cheng, Zhen
    Wang, Sibo
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, NLPCC 2022, PT II, 2022, 13552 : 216 - 227
  • [3] Question-answering Forestry Pre-trained Language Model: ForestBERT
    Tan, Jingwei
    Zhang, Huaiqing
    Liu, Yang
    Yang, Jie
    Zheng, Dongping
    Linye Kexue/Scientia Silvae Sinicae, 2024, 60 (09): 99 - 110
  • [4] Improving Visual Question Answering with Pre-trained Language Modeling
    Wu, Yue
    Gao, Huiyi
    Chen, Lei
    FIFTH INTERNATIONAL WORKSHOP ON PATTERN RECOGNITION, 2020, 11526
  • [5] A survey of text classification based on pre-trained language model
    Wu, Yujia
    Wan, Jun
    NEUROCOMPUTING, 2025, 616
  • [6] Multi-Hop Knowledge Base Question Answering with Pre-Trained Language Model Feature Enhancement
    Wei, Qianqiang
    Zhao, Shuliang
    Lu, Danqi
    Jia, Xiaowen
    Yang, Shilong
    Computer Engineering and Applications, 2024, 60 (22) : 184 - 196
  • [7] An empirical study of pre-trained language models in simple knowledge graph question answering
    Hu, Nan
    Wu, Yike
    Qi, Guilin
    Min, Dehai
    Chen, Jiaoyan
    Pan, Jeff Z.
    Ali, Zafar
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2023, 26 (05): 2855 - 2886
  • [8] Compressing and Debiasing Vision-Language Pre-Trained Models for Visual Question Answering
    Si, Qingyi
    Liu, Yuanxin
    Lin, Zheng
    Fu, Peng
    Cao, Yanan
    Wang, Weiping
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 513 - 529
  • [9] ReLMKG: reasoning with pre-trained language models and knowledge graphs for complex question answering
    Cao, Xing
    Liu, Yun
    Applied Intelligence, 2023, 53 : 12032 - 12046