MERGE: A Modal Equilibrium Relational Graph Framework for Multi-Modal Knowledge Graph Completion

Cited by: 0
Authors
Shang, Yuying [1 ,2 ,3 ,4 ]
Fu, Kun [1 ,2 ,3 ]
Zhang, Zequn [1 ,2 ]
Jin, Li [1 ,2 ]
Liu, Zinan [1 ,3 ,4 ]
Wang, Shensi [1 ,2 ,3 ,4 ]
Li, Shuchao [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Aerosp Informat Res Inst, Beijing 100094, Peoples R China
[2] Chinese Acad Sci, Aerosp Informat Res Inst, Key Lab Network Informat Syst Technol NIST, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100094, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-modal knowledge graph; knowledge graph representation; graph attention network; information integration;
DOI
10.3390/s24237605
Chinese Library Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
The multi-modal knowledge graph completion (MMKGC) task aims to automatically mine missing factual knowledge from existing multi-modal knowledge graphs (MMKGs), which is crucial for advancing cross-modal learning and reasoning. However, few methods consider the adverse effects that different kinds of missing modal information have on model learning. To address these challenges, we propose a Modal Equilibrium Relational Graph framEwork, called MERGE. By constructing three modal-specific directed relational graph attention networks, MERGE implicitly represents missing modal information for entities by aggregating modal embeddings from neighboring nodes. A fusion approach based on low-rank tensor decomposition is then adopted to align the modal features at both the explicit structural level and the implicit semantic level, exploiting the structural information inherent in the original knowledge graphs and thereby enhancing the interpretability of the fused features. Furthermore, we introduce a novel interpolation re-ranking strategy that adjusts the importance of modalities during inference while preserving the semantic integrity of each modality. The proposed framework has been validated on four publicly available datasets, and the experimental results demonstrate the effectiveness and robustness of our method on the MMKGC task.
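To make the fusion step concrete, the following is a minimal, hypothetical PyTorch sketch of low-rank multimodal tensor fusion in the general style the abstract describes (an LMF-style factorization). It is not the authors' implementation: the module name, dimensions, rank, and the choice of three modalities (structural, textual, visual) are illustrative assumptions drawn only from the abstract.

import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Rank-r factorized fusion of several modal embeddings (LMF-style sketch)."""
    def __init__(self, dims, out_dim, rank=4):
        super().__init__()
        # One low-rank factor per modality; summing the rank-r slices
        # approximates a full multi-way fusion tensor without materializing it.
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, out_dim) * 0.02) for d in dims]
        )

    def forward(self, feats):
        # feats: list of [batch, d_m] embeddings, e.g. structural / textual / visual.
        # Each vector is padded with a constant 1 so lower-order (uni-/bi-modal)
        # interactions survive the element-wise product across modalities.
        fused = None
        for x, w in zip(feats, self.factors):
            ones = torch.ones(x.size(0), 1, device=x.device, dtype=x.dtype)
            x1 = torch.cat([x, ones], dim=-1)            # [batch, d_m + 1]
            proj = torch.einsum("bd,rdo->rbo", x1, w)    # [rank, batch, out_dim]
            fused = proj if fused is None else fused * proj
        return fused.sum(dim=0)                          # [batch, out_dim]

# Toy usage: fuse three modal embeddings of an entity batch into one joint vector.
fusion = LowRankFusion(dims=[128, 256, 512], out_dim=200, rank=4)
s, t, v = torch.randn(8, 128), torch.randn(8, 256), torch.randn(8, 512)
joint = fusion([s, t, v])  # shape: [8, 200]

The point of this kind of factorization is that the full fusion tensor is never materialized: each modality is projected through its rank-r factors and combined by an element-wise product, keeping the parameter count linear in the number of modalities. How MERGE's interpolation re-ranking then weighs the per-modality and fused scores at inference is specific to the paper and not reproduced here.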
Pages: 30