MGKGR: Multimodal Semantic Fusion for Geographic Knowledge Graph Representation

Times Cited: 0
Authors
Zhang, Jianqiang [1 ]
Chen, Renyao [1 ]
Li, Shengwen [1 ,2 ,3 ]
Li, Tailong [4 ]
Yao, Hong [1 ,2 ,3 ,4 ]
Affiliations
[1] China Univ Geosci, Sch Comp Sci, Wuhan 430074, Peoples R China
[2] China Univ Geosci, State Key Lab Biogeol & Environm Geol, Wuhan 430074, Peoples R China
[3] China Univ Geosci, Hubei Key Lab Intelligent Geoinformat Proc, Wuhan 430078, Peoples R China
[4] China Univ Geosci, Sch Future Technol, Wuhan 430074, Peoples R China
Keywords
multimodal; geographic knowledge graph; knowledge graph representation
DOI
10.3390/a17120593
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Geographic knowledge graph representation learning embeds the entities and relations of geographic knowledge graphs into a low-dimensional continuous vector space, serving as a basic method that bridges geographic knowledge graphs and geographic applications. Previous methods learn entity and relation vectors primarily from spatial attributes and relations, ignoring the diverse semantics of entities and thus yielding poor embeddings of geographic knowledge graphs. This study proposes a two-stage multimodal geographic knowledge graph representation (MGKGR) model that integrates multiple kinds of semantics to improve embedding learning. In the first stage, a spatial feature fusion method for modality enhancement combines the structural features of the geographic knowledge graph with the semantic features of two modalities. In the second stage, a multi-level modality feature fusion method integrates the heterogeneous features from the different modalities. By fusing the semantics of text and images, MGKGR improves the quality of geographic knowledge graph representations and provides accurate embeddings for downstream geographic intelligence tasks. Extensive experiments on two datasets show that MGKGR outperforms the baselines, and the results further demonstrate that integrating textual and image data into geographic knowledge graphs effectively enhances the model's performance.
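To make the two-stage design concrete, the sketch below is a minimal, hypothetical PyTorch illustration of such a pipeline; it is not the authors' implementation. It assumes pre-extracted structural, text, and image embeddings for each entity, and the module name TwoStageMultimodalFusion, the feature dimensions, and the softmax-gated fusion are illustrative assumptions only.

    # Hypothetical sketch of a two-stage multimodal fusion for geographic KG
    # representation. Assumes pre-extracted per-entity features: structural
    # (from a KG embedding model), text, and image. All names, dimensions,
    # and the gating scheme are illustrative assumptions, not the paper's code.
    import torch
    import torch.nn as nn

    class TwoStageMultimodalFusion(nn.Module):
        def __init__(self, struct_dim=200, text_dim=768, image_dim=512, out_dim=200):
            super().__init__()
            # Stage 1: project each modality into a shared space and enhance it
            # with the graph's structural features.
            self.struct_proj = nn.Linear(struct_dim, out_dim)
            self.text_proj = nn.Linear(text_dim, out_dim)
            self.image_proj = nn.Linear(image_dim, out_dim)
            # Stage 2: gated fusion of the structure-enhanced modality features.
            self.gate = nn.Sequential(nn.Linear(3 * out_dim, 3), nn.Softmax(dim=-1))

        def forward(self, struct_emb, text_emb, image_emb):
            s = self.struct_proj(struct_emb)
            # Stage 1: modality enhancement -- inject structural signal into each modality.
            t = torch.tanh(self.text_proj(text_emb) + s)
            v = torch.tanh(self.image_proj(image_emb) + s)
            # Stage 2: learn per-entity weights and fuse the three feature streams.
            weights = self.gate(torch.cat([s, t, v], dim=-1))     # (batch, 3)
            stacked = torch.stack([s, t, v], dim=1)               # (batch, 3, out_dim)
            fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # (batch, out_dim)
            return fused                                          # fused entity embedding

    if __name__ == "__main__":
        # Toy usage: 4 entities with random pre-extracted features.
        model = TwoStageMultimodalFusion()
        fused = model(torch.randn(4, 200), torch.randn(4, 768), torch.randn(4, 512))
        print(fused.shape)  # torch.Size([4, 200])

In practice, a fused embedding like this would feed a standard link-prediction scoring function (for example, a TransE-style margin loss) and be trained end to end; the specific objective used by MGKGR is described in the paper itself.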
Pages: 16