Variational autoencoder densified graph attention for fusing synonymous entities: Model and protocol

Cited: 1
Authors
Li, Qian [1 ,2 ]
Wang, Daling [1 ]
Feng, Shi [1 ]
Song, Kaisong [1 ,3 ]
Zhang, Yifei [1 ]
Yu, Ge [1 ]
Affiliations
[1] Northeastern Univ, Sch Comp Sci & Engn, Shenyang, Peoples R China
[2] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[3] Alibaba Grp, DAMO Acad, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Open knowledge graph; Knowledge graph representation; Cluster ranking; Link prediction;
DOI
10.1016/j.knosys.2022.110061
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Predicting missing links in open knowledge graphs (OpenKGs) poses unique challenges compared with the well-studied curated knowledge graphs (CuratedKGs). Unlike CuratedKGs, whose entities are fully disambiguated against a fixed vocabulary, OpenKGs consist of entities represented by non-canonicalized free-form noun phrases and do not require an ontology specification, which leads to synonymity (multiple entities with different surface forms share the same meaning) and sparsity (a large portion of entities have few links). Capturing synonymous features under such sparsity, and evaluating queries that admit multiple correct answers, pose challenges to existing models and evaluation protocols. In this paper, we propose VGAT, a variational-autoencoder-densified graph attention model that automatically mines synonymity features, and CR, a cluster ranking protocol that evaluates multiple answers in OpenKGs. The VGAT model builds on the following key ideas: (1) a phrasal synonymity encoder captures phrasal features, so that entities with synonymous text automatically obtain closer representations; (2) a neighbor synonymity encoder mines structural features with a graph attention network, recursively bringing entities with synonymous neighbors closer in representation space; (3) a densification step densifies the OpenKG by generating similar embeddings and negative samples. For the protocol, CR is designed from the perspectives of significance and compactness to comprehensively evaluate multiple answers. Extensive experiments and analysis show the effectiveness of the VGAT model and the rationality of the CR protocol. (c) 2022 Elsevier B.V. All rights reserved.
Pages: 12
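The abstract describes three model components (a phrasal synonymity encoder, a graph-attention neighbor synonymity encoder, and a VAE-based densification step) without giving their formulation here. The snippet below is a minimal PyTorch sketch of the two generic building blocks that description relies on: single-head graph attention over an entity's neighbors, and a reparameterized Gaussian head that samples extra embeddings around an entity, which could serve as densifying positives. Class names, dimensions, and the overall wiring are illustrative assumptions, not the authors' released implementation.

# Illustrative sketch only: graph attention over neighbors plus a VAE-style
# reparameterized head for densification. All names, dimensions, and wiring
# are assumptions inferred from the abstract, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeighborAttentionEncoder(nn.Module):
    """Single-head, GAT-style attention over an entity's neighbors."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.att = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, ent: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # ent: (batch, dim); neighbors: (batch, n_neigh, dim)
        h_ent = self.proj(ent).unsqueeze(1).expand_as(neighbors)
        h_nb = self.proj(neighbors)
        scores = F.leaky_relu(self.att(torch.cat([h_ent, h_nb], dim=-1)))
        alpha = torch.softmax(scores, dim=1)      # attention weights over neighbors
        return (alpha * h_nb).sum(dim=1)          # aggregated neighborhood view


class VariationalDensifier(nn.Module):
    """VAE-style head: encode an entity to (mu, logvar) and sample extra
    embeddings around it, usable as densifying positives."""

    def __init__(self, dim: int):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor):
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)      # reparameterization trick
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl


if __name__ == "__main__":
    dim, batch, n_neigh = 64, 8, 5
    ent = torch.randn(batch, dim)                 # e.g. phrasal embedding of the entity
    neighbors = torch.randn(batch, n_neigh, dim)  # embeddings of its graph neighbors
    fused = NeighborAttentionEncoder(dim)(ent, neighbors)
    z, kl = VariationalDensifier(dim)(ent + fused)
    print(z.shape, kl.item())                     # sampled extra embedding + KL term

In a fuller pipeline one would feed phrase-level embeddings of entity names into the attention encoder, combine its output with the entity embedding, and add the (weighted) KL term to the link-prediction loss; these choices are likewise assumptions for illustration.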