Explainable reasoning over temporal knowledge graphs by pre-trained language model

Citations: 0
Authors
Li, Qing [1 ]
Wu, Guanzhong [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710000, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Knowledge graph reasoning; Pre-trained language model; Logical reasoning; Temporal knowledge graph; Multi-hop paths; Graph representation learning;
DOI
10.1016/j.ipm.2024.103903
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
Temporal knowledge graph reasoning (TKGR) is a crucial task for modeling evolving knowledge, aiming to infer unknown connections between entities at specific times. Traditional TKGR methods aggregate structural information between entities and evolve entity representations across distinct snapshots, while other methods attempt to extract temporal logic rules from historical interactions. However, these methods fail to handle the unseen entities that continuously emerge over time and ignore the historical dependencies between entities and relations. To overcome these limitations, we propose a novel method, termed TPNet, which introduces a historical information completion strategy (HICS) and a pre-trained language model (PLM) to conduct explainable inductive reasoning over TKGs. Specifically, TPNet extracts reliable temporal logical paths from historical subgraphs using a temporal-correlated search strategy. For unseen entities, HICS samples or generates paths to supplement their historical information. In addition, a PLM and a time-aware encoder jointly encode the temporal paths, comprehensively capturing the dependencies between entities and relations. Moreover, the semantic similarity between query quadruples and the extracted paths is evaluated to simultaneously optimize the representations of entities and relations. Extensive experiments on entity and relation prediction tasks evaluate the performance of TPNet. The results on four benchmark datasets demonstrate the superiority of TPNet over state-of-the-art TKGR methods, with MRR improvements of 14.35%, 23.08%, 6.75% and 5.38%, respectively.
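The abstract describes scoring query quadruples against PLM-encoded temporal paths with a time-aware encoder. The sketch below illustrates that general idea only; it is not the authors' implementation. The `bert-base-uncased` backbone, mean pooling, sinusoidal time features, cosine similarity, and all example strings are illustrative assumptions.

```python
# Minimal, hypothetical sketch: encode a query quadruple and an extracted
# temporal path with a pre-trained language model, add a simple time-aware
# encoding, and rank paths by semantic similarity. All modeling choices here
# (backbone, pooling, time features) are assumptions, not TPNet's design.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed PLM backbone
plm = AutoModel.from_pretrained("bert-base-uncased")


def encode_text(text: str) -> torch.Tensor:
    """Mean-pool the PLM's last hidden states into a single vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = plm(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)          # (dim,)


def time_encoding(timestamp: int, dim: int) -> torch.Tensor:
    """Sinusoidal time features as a stand-in for the time-aware encoder."""
    freqs = torch.arange(dim // 2, dtype=torch.float)
    angles = timestamp / (10000 ** (2 * freqs / dim))
    return torch.cat([torch.sin(angles), torch.cos(angles)])


def score_path(query: str, q_time: int, path: str, p_time: int) -> float:
    """Semantic similarity between a query quadruple and a temporal path."""
    dim = plm.config.hidden_size
    q_vec = encode_text(query) + time_encoding(q_time, dim)
    p_vec = encode_text(path) + time_encoding(p_time, dim)
    return F.cosine_similarity(q_vec, p_vec, dim=0).item()


# Toy usage: rank two candidate paths for a (subject, relation, ?, time) query.
query = "Germany consult ? on 2014-06-01"
paths = [("Germany negotiate_with France on 2014-05-28", 148),
         ("Germany host Olympic_Committee on 2010-02-12", 42)]
for text, t in paths:
    print(round(score_path(query, 152, text, t), 4), text)
```

In this sketch the higher-scoring path would be preferred as supporting evidence for the query; the actual method jointly optimizes the representations rather than using a fixed encoder.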
Pages: 15
Related Papers
50 records in total
  • [31] Surgicberta: a pre-trained language model for procedural surgical language
    Bombieri, Marco
    Rospocher, Marco
    Ponzetto, Simone Paolo
    Fiorini, Paolo
    INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS, 2024, 18 (01) : 69 - 81
  • [32] Improving Math Word Problems with Pre-trained Knowledge and Hierarchical Reasoning
    Yu, Weijiang
    Wen, Yingpeng
    Zheng, Fudan
    Xiao, Nong
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3384 - 3394
  • [33] Probing Simile Knowledge from Pre-trained Language Models
    Chen, Weijie
    Chang, Yongzhu
    Zhang, Rongsheng
    Pu, Jiashu
    Chen, Guandan
    Zhang, Le
    Xi, Yadong
    Chen, Yijiang
    Su, Chang
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 5875 - 5887
  • [34] ProSide: Knowledge Projector and Sideway for Pre-trained Language Models
    He, Chaofan
    Lu, Gewei
    Shen, Liping
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT II, NLPCC 2024, 2025, 15360 : 56 - 68
  • [35] Continual knowledge infusion into pre-trained biomedical language models
    Jha, Kishlay
    Zhang, Aidong
    BIOINFORMATICS, 2022, 38 (02) : 494 - 502
  • [36] AdaDS: Adaptive data selection for accelerating pre-trained language model knowledge distillation
    Zhou, Qinhong
    Li, Peng
    Liu, Yang
    Guan, Yuyang
    Xing, Qizhou
    Chen, Ming
    Sun, Maosong
    Liu, Yang
    AI OPEN, 2023, 4 : 56 - 63
  • [37] Exploring Named Entity Recognition in Medical Knowledge Graphs with Pre-trained Language Models and Attention Mechanism
    Zhang, Junsong
    Hamdulla, Askar
    Tohti, Turdi
    COMPUTER APPLICATIONS, CCF NCCA 2024, PT I, 2024, 2274 : 167 - 182
  • [38] Pre-trained Language Model for Biomedical Question Answering
    Yoon, Wonjin
    Lee, Jinhyuk
    Kim, Donghyeon
    Jeong, Minbyul
    Kang, Jaewoo
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 1168 : 727 - 740
  • [39] BERTweet: A pre-trained language model for English Tweets
    Dat Quoc Nguyen
    Thanh Vu
    Anh Tuan Nguyen
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING: SYSTEM DEMONSTRATIONS, 2020, : 9 - 14
  • [40] ViDeBERTa: A powerful pre-trained language model for Vietnamese
    Tran, Cong Dao
    Pham, Nhut Huy
    Nguyen, Anh
    Hy, Truong Son
    Vu, Tu
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1071 - 1078