Multi-modal Multi-relational Feature Aggregation Network for Medical Knowledge Representation Learning

Cited by: 11
Authors
Zhang, Yingying [1 ]
Fang, Quan [1 ]
Qian, Shengsheng [1 ]
Xu, Changsheng [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Univ Chinese Acad Sci, Beijing, Peoples R China
[2] Peng Cheng Lab, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
knowledge graph; heterogeneous graph; attention mechanism;
DOI
10.1145/3394171.3413736
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Representation learning of medical Knowledge Graphs (KGs) is an important task and forms the foundation of intelligent medical applications such as disease diagnosis and healthcare question answering. Many embedding models have therefore been proposed to learn vector representations for entities and relations, but they ignore three important properties of medical KGs: they are multi-modal, unbalanced, and heterogeneous. Entities in a medical KG can carry unstructured multi-modal content, such as images and text. At the same time, the knowledge graph consists of multiple types of entities and relations, and each entity has a varying number of neighbors. In this paper, we propose a Multi-modal Multi-Relational Feature Aggregation Network (MMRFAN) for medical knowledge representation learning. To deal with the multi-modal content of an entity, we propose an adversarial feature learning model that maps the entity's textual and image information into the same vector space and learns a common multi-modal representation. To better capture the complex structure and rich semantics, we design a sampling mechanism and aggregate neighbors with intra- and inter-relation attention. We evaluate our model on three knowledge graphs (FB15k-237, IMDb, and Symptoms-in-Chinese) on link prediction and node classification tasks. Experimental results show that our approach outperforms state-of-the-art methods.
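The two-level aggregation described in the abstract can be sketched in a simplified form: intra-relation attention weights the neighbors under each relation type to produce a per-relation summary, and inter-relation attention then weights those summaries. This is a toy, dot-product-based sketch only; the paper's actual model uses learned projection parameters, multi-modal features, and a neighbor-sampling mechanism, all omitted here.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate(entity, neighbors_by_relation):
    """Aggregate neighbor features with intra- and inter-relation attention.

    entity: (d,) feature vector of the target entity.
    neighbors_by_relation: dict mapping relation name -> (n_r, d) array
        of neighbor feature vectors under that relation.
    Returns a (d,) aggregated representation.
    """
    relation_summaries = []
    for rel, neigh in neighbors_by_relation.items():
        # Intra-relation attention: score each neighbor against the entity,
        # then take the attention-weighted sum of that relation's neighbors.
        alpha = softmax(neigh @ entity)               # (n_r,)
        relation_summaries.append(alpha @ neigh)      # (d,)
    S = np.stack(relation_summaries)                  # (R, d)
    # Inter-relation attention: weight the per-relation summaries.
    beta = softmax(S @ entity)                        # (R,)
    return beta @ S                                   # (d,)
```

In the full model, the dot-product scores would be replaced by learned attention functions, but the nesting (neighbors within a relation, then across relations) follows the structure the abstract describes.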
Pages: 3956-3965
Page count: 10
Related Papers
50 records total
  • [41] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    NEUROCOMPUTING, 2024, 570
  • [42] Metric learning with multi-relational data
    Pan, Jiajun
    Le Capitaine, Hoel
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024,
  • [43] Multi-Modal Knowledge Representation Learning via Webly-Supervised Relationships Mining
    Nian, Fudong
    Bao, Bing-Kun
    Li, Teng
    Xu, Changsheng
    PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17), 2017, : 411 - 419
  • [44] Hyper-node Relational Graph Attention Network for Multi-modal Knowledge Graph Completion
    Liang, Shuang
    Zhu, Anjie
    Zhang, Jiasheng
    Shao, Jie
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (02)
  • [45] Multi-modal Knowledge-aware Reinforcement Learning Network for Explainable Recommendation
    Tao, Shaohua
    Qiu, Runhe
    Ping, Yuan
    Ma, Hui
    KNOWLEDGE-BASED SYSTEMS, 2021, 227
  • [46] A Multi-relational Learning Approach for Knowledge Extraction in in Vitro Fertilization Domain
    Basile, Teresa M. A.
    Esposito, Floriana
    Caponetti, Laura
    ADVANCES IN VISUAL COMPUTING, PT I, 2010, 6453 : 571 - 581
  • [47] Graph-Text Multi-Modal Pre-training for Medical Representation Learning
    Park, Sungjin
    Bae, Seongsu
    Kim, Jiho
    Kim, Tackeun
    Choi, Edward
    CONFERENCE ON HEALTH, INFERENCE, AND LEARNING, VOL 174, 2022, 174 : 261 - 281
  • [48] Multi-relational data mining in medical databases
    Habrard, A
    Bernard, M
    Jacquenet, F
    ARTIFICIAL INTELLIGENCE IN MEDICINE, PROCEEDINGS, 2003, 2780 : 365 - 374
  • [49] MERGE: A Modal Equilibrium Relational Graph Framework for Multi-Modal Knowledge Graph Completion
    Shang, Yuying
    Fu, Kun
    Zhang, Zequn
    Jin, Li
    Liu, Zinan
    Wang, Shensi
    Li, Shuchao
    SENSORS, 2024, 24 (23)
  • [50] MDANet: Multi-Modal Deep Aggregation Network for Depth Completion
    Ke, Yanjie
    Li, Kun
    Yang, Wei
    Xu, Zhenbo
    Hao, Dayang
    Huang, Liusheng
    Wang, Gang
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 4288 - 4294