Fine-tuning BERT for Joint Entity and Relation Extraction in Chinese Medical Text

Cited by: 0
Authors
Xue, Kui [1 ]
Zhou, Yangming [1 ]
Ma, Zhiyuan [1 ]
Ruan, Tong [1 ]
Zhang, Huanhuan [1 ]
He, Ping [2 ]
Affiliations
[1] East China Univ Sci & Technol, Sch Informat Sci & Engn, Shanghai 200237, Peoples R China
[2] Shanghai Hosp Dev Ctr, Shanghai 200041, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Named entity recognition; Relation classification; Joint model; BERT language model; Electronic health records;
DOI
Not available
CLC number
Q5 [Biochemistry];
Discipline codes
071010; 081704;
Abstract
Entity and relation extraction is a necessary step in structuring medical text. However, the feature extraction ability of the bidirectional long short-term memory networks used in existing models falls short of the best achievable performance, while pre-trained language models have achieved excellent results on a growing range of natural language processing tasks. In this paper, we present a focused attention model for the joint entity and relation extraction task. Our model integrates the well-known BERT language model into joint learning through a dynamic range attention mechanism, thereby improving the feature representation ability of the shared parameter layer. Experimental results on coronary angiography texts collected from Shuguang Hospital show that the F1-scores of the named entity recognition and relation classification tasks reach 96.89% and 88.51%, outperforming state-of-the-art methods by 1.65% and 1.22%, respectively.
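The abstract describes a shared parameter layer whose features feed both a named-entity-recognition head and a relation-classification head, trained jointly. The following minimal sketch illustrates that parameter-sharing idea only; it is not the paper's architecture: the toy `encode` function stands in for BERT hidden states, the dynamic range attention mechanism is omitted, and every function name and weight below is invented for illustration.

```python
# Toy sketch of joint NER + relation classification over a shared encoder.
# The real model would use BERT (plus focused attention) in place of encode().
import math

def encode(tokens):
    """Stand-in for the shared encoder: one small vector per token."""
    return [[(sum(map(ord, t)) % 7) / 7.0, len(t) / 10.0] for t in tokens]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def linear(vec, weights):
    """One logit per weight row."""
    return [sum(w * v for w, v in zip(row, vec)) for row in weights]

def ner_head(hidden, w_tag):
    """Token-level tag distributions (e.g. B/I/O) from the shared features."""
    return [softmax(linear(h, w_tag)) for h in hidden]

def re_head(hidden, i, j, w_rel):
    """Relation distribution for the candidate entity pair at positions i, j."""
    return softmax(linear(hidden[i] + hidden[j], w_rel))  # concat pair vectors

def joint_loss(tag_probs, tag_gold, rel_probs, rel_gold):
    """Joint objective: NER cross-entropy plus relation cross-entropy, so
    gradients from both tasks flow into the shared encoder parameters."""
    ner = -sum(math.log(p[g]) for p, g in zip(tag_probs, tag_gold))
    rel = -math.log(rel_probs[rel_gold])
    return ner + rel
```

Because both heads read the same encoder output and the losses are summed, updating the encoder to help one task also reshapes the features seen by the other, which is the core benefit joint models claim over pipelined extraction.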
Pages: 892 - 897
Page count: 6
Related papers
50 records
  • [31] Automated Intention Mining with Comparatively Fine-tuning BERT
    Sun, Xuan
    Li, Luqun
    Mercaldo, Francesco
    Yang, Yichen
    Santone, Antonella
    Martinelli, Fabio
    2021 5TH INTERNATIONAL CONFERENCE ON NATURAL LANGUAGE PROCESSING AND INFORMATION RETRIEVAL, NLPIR 2021, 2021, : 157 - 162
  • [32] Chinese Relation Extraction of Apple Diseases and Pests Based on BERT and Entity Information
    Guo, Mei
    Zhang, Jiayu
    Geng, Nan
    Geng, Yaojun
    Zhang, Yongliang
    Li, Mei
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2022, PT III, 2022, 13370 : 579 - 592
  • [33] BERT-ERC: Fine-Tuning BERT Is Enough for Emotion Recognition in Conversation
    Qin, Xiangyu
    Wu, Zhiyu
    Zhang, Tingting
    Li, Yanran
    Luan, Jian
    Wang, Bin
    Wang, Li
    Cui, Jinshi
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 13492 - 13500
  • [34] Emotion detection in psychological texts by fine-tuning BERT using emotion–cause pair extraction
    Kumar, A.
    Jain, A. K.
    International Journal of Speech Technology, 2022, 25 (03) : 727 - 743
  • [35] Investigating the effect of different fine-tuning configuration scenarios on agricultural term extraction using BERT
    Panoutsopoulos, Hercules
    Espejo-Garcia, Borja
    Raaijmakers, Stephan
    Wang, Xu
    Fountas, Spyros
    Brewster, Christopher
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2024, 225
  • [36] Enhanced Discriminative Fine-Tuning of Large Language Models for Chinese Text Classification
    Song, Jinwang
    Zan, Hongying
    Zhang, Kunli
    2024 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING, IALP 2024, 2024, : 168 - 174
  • [37] EEBERT: An Emoji-Enhanced BERT Fine-Tuning on Amazon Product Reviews for Text Sentiment Classification
    Narejo, Komal Rani
    Zan, Hongying
    Dharmani, Kheem Parkash
    Zhou, Lijuan
    Alahmadi, Tahani Jaser
    Assam, Muhammad
    Sehito, Nabila
    Ghadi, Yazeed Yasin
    IEEE ACCESS, 2024, 12 : 131954 - 131967
  • [38] Fine-Tuning BERT for Multi-Label Sentiment Analysis in Unbalanced Code-Switching Text
    Tang, Tiancheng
    Tang, Xinhuai
    Yuan, Tianyi
    IEEE ACCESS, 2020, 8 (08): : 193248 - 193256
  • [39] A neural joint model for entity and relation extraction from biomedical text
    Li, Fei
    Zhang, Meishan
    Fu, Guohong
    Ji, Donghong
    BMC BIOINFORMATICS, 2017, 18
  • [40] An Autoregressive Text-to-Graph Framework for Joint Entity and Relation Extraction
    Zaratiana, Urchade
    Tomeh, Nadi
    Holat, Pierre
    Charnois, Thierry
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 17, 2024, : 19477 - 19487