KBioXLM: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model

Cited by: 0
|
Authors
Geng, Lei [1 ]
Yan, Xu [1 ]
Cao, Ziqiang [1 ]
Li, Juntao [1 ]
Li, Wenjie [3 ]
Li, Sujian [2 ]
Zhou, Xinjie [4 ]
Yang, Yang [4 ]
Zhang, Jun [5 ]
Affiliations
[1] Soochow Univ, Inst Artificial Intelligence, Suzhou, Peoples R China
[2] Peking Univ, Beijing, Peoples R China
[3] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[4] Pharmcube, Beijing, Peoples R China
[5] Changping Lab, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
CORPUS;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Most biomedical pretrained language models are monolingual and cannot meet the growing demand for cross-lingual applications. The scarcity of non-English domain corpora, not to mention parallel data, poses a significant hurdle to training multilingual biomedical models. Since knowledge forms the core of domain-specific corpora and can be translated into various languages accurately, we propose KBioXLM, a model that adapts the multilingual pretrained model XLM-R to the biomedical domain through a knowledge-anchored approach. We construct a biomedical multilingual corpus by incorporating knowledge alignments at three granularities (entity, fact, and passage levels) into monolingual corpora. We then design three corresponding training tasks (entity masking, relation masking, and passage relation prediction) and continue training on top of XLM-R to enhance its cross-lingual ability in the domain. To validate the effectiveness of our model, we translate the English benchmarks of multiple tasks into Chinese. Experimental results demonstrate that our model significantly outperforms monolingual and multilingual pretrained models in cross-lingual zero-shot and few-shot scenarios, achieving improvements of over 10 points in the best cases. Our code is publicly available at https://github.com/ngwlh-gl/KBioXLM.
Pages: 11239 - 11250
Page count: 12
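To make the knowledge-anchored pretraining described in the abstract concrete, below is a minimal sketch of the entity-masking task (one of the three training objectives) built on top of XLM-R with HuggingFace transformers. The sentence, the entity, and the span-matching logic are illustrative assumptions, not the authors' implementation; the released code is at https://github.com/ngwlh-gl/KBioXLM.

# Minimal sketch of KBioXLM-style entity masking on top of XLM-R, using
# HuggingFace transformers. The sentence, entity, and span-matching logic
# are illustrative assumptions, not the authors' released implementation.
import torch
from transformers import XLMRobertaTokenizer, XLMRobertaForMaskedLM

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")

text = "Aspirin is used to reduce fever and relieve pain."
entity = "Aspirin"  # a knowledge-base entity aligned across languages

enc = tokenizer(text, return_tensors="pt")
entity_ids = tokenizer(entity, add_special_tokens=False)["input_ids"]

input_ids = enc["input_ids"].clone()
labels = torch.full_like(input_ids, -100)  # -100 positions are ignored by the MLM loss

# Mask the entity's whole token span (instead of random subwords), so the
# model is trained to recover the biomedical entity from its context.
ids, n = input_ids[0].tolist(), len(entity_ids)
for start in range(len(ids) - n + 1):
    if ids[start:start + n] == entity_ids:
        labels[0, start:start + n] = input_ids[0, start:start + n]
        input_ids[0, start:start + n] = tokenizer.mask_token_id
        break
else:
    raise ValueError("entity tokens not found in the sentence")

out = model(input_ids=input_ids, attention_mask=enc["attention_mask"], labels=labels)
out.loss.backward()  # an optimizer step would follow during continued pretraining

The other two objectives would follow the same pattern under this reading of the abstract: relation masking masks the tokens realizing an aligned fact rather than a single entity, and passage relation prediction feeds a bilingual passage pair through the encoder with a binary related/unrelated classification head.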
Related Papers
50 records in total
  • [21] Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model
    Chen, Xiaolin
    Song, Xuemeng
    Jing, Liqiang
    Li, Shuo
    Hu, Linmei
    Nie, Liqiang
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 42 (02)
  • [22] CnGeoPLM: Contextual knowledge selection and embedding with pretrained language representation model for the geoscience domain
    Ma, Kai
    Zheng, Shuai
    Tian, Miao
    Qiu, Qinjun
    Tan, Yongjian
    Hu, Xinxin
    Li, HaiYan
    Xie, Zhong
    EARTH SCIENCE INFORMATICS, 2023, 16 (04) : 3629 - 3646
  • [23] Knowledge Enhanced Language Model for Biomedical Natural Language Processing: Introducing a New Language Model for BioNLP
    Naseem, Usman
    Zhang, Qi
    Hu, Liang
    Hussain, Sadam
    Wang, Shoujin
    IEEE SYSTEMS MAN AND CYBERNETICS MAGAZINE, 2025, 11 (01) : 89 - 94
  • [24] AMMU: A survey of transformer-based biomedical pretrained language models
    Kalyan, Katikapalli Subramanyam
    Rajasekharan, Ajit
    Sangeetha, Sivanesan
    JOURNAL OF BIOMEDICAL INFORMATICS, 2022, 126
  • [25] Semantic transference for enriching multilingual biomedical knowledge resources
    Perez, Maria
    Berlanga, Rafael
    JOURNAL OF BIOMEDICAL INFORMATICS, 2015, 58 : 1 - 10
  • [26] Leveraging Wikipedia knowledge to classify multilingual biomedical documents
    Mourino Garcia, Marcos Antonio
    Perez Rodriguez, Roberto
    Anido Rifon, Luis
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2018, 88 : 37 - 57
  • [27] YNU-HPCC at SemEval-2023 Task 9: Pretrained Language Model for Multilingual Tweet Intimacy Analysis
    Cai, Qisheng
    Wang, Jin
    Zhang, Xuejie
    17TH INTERNATIONAL WORKSHOP ON SEMANTIC EVALUATION, SEMEVAL-2023, 2023, : 733 - 738
  • [28] TRANSLICO: A Contrastive Learning Framework to Address the Script Barrier in Multilingual Pretrained Language Models
    Liu, Yihong
    Ma, Chunlan
    Ye, Haotian
    Schuetze, Hinrich
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 2476 - 2499
  • [29] Pretrained Language Model Embryology: The Birth of ALBERT
    Chiang, Cheng-Han
    Huang, Sung-Feng
    Lee, Hung-Yi
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 6813 - 6828
  • [30] SCIBERT: A Pretrained Language Model for Scientific Text
    Beltagy, Iz
    Lo, Kyle
    Cohan, Arman
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 3615 - 3620