Multi-level Interaction Network for Multi-Modal Rumor Detection

Cited by: 0
Authors
Zou, Ting [1 ]
Qian, Zhong [1 ]
Li, Peifeng [1 ]
Affiliations
[1] Soochow Univ, Sch Comp Sci & Technol, Suzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
multi-modal rumor detection; multi-level interaction network; external knowledge; multi-modal fusion;
DOI
10.1109/IJCNN54540.2023.10191639
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The rapid development of social platforms has intensified the creation and spread of rumors. Hence, automatic Rumor Detection (RD) is an important and urgent task for maintaining public interests and social harmony. As one of the frontier subtasks of RD, Multi-Modal Rumor Detection (MMRD) has become a new research hotspot. Previous methods focused on inferring clues from media content, ignoring the rich knowledge contained in texts and images. Moreover, existing methods are limited to cascade operators for encoding multi-modal relationships, which cannot capture the interactions between multiple modalities. In this paper, we propose a novel Multi-level Interaction Network (MIN), which regards entities and their relevant external knowledge as prior knowledge to provide additional features. Meanwhile, in MIN, we design a Co-Attention Network (CAN) to implement three-level interactions (i.e., between entities and the image, between text and external knowledge, and between refined text and refined image) for multi-modal fusion. Experimental results on three public datasets (i.e., Fakeddit, Pheme and Weibo) demonstrate that our MIN model outperforms state-of-the-art methods.
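The abstract's three-level interactions are built on co-attention, where each modality's features attend over the other's to produce the "refined" representations. As a rough illustration only (not the authors' exact CAN; the scaled dot-product form and dimensions here are assumptions), one cross-modal co-attention pass might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(a, b):
    """One co-attention pass between two modalities.

    a: (La, d) feature matrix (e.g., text tokens or entities)
    b: (Lb, d) feature matrix (e.g., image regions or knowledge entries)
    Returns refined versions of both, each attending over the other.
    """
    d = a.shape[1]
    affinity = a @ b.T / np.sqrt(d)            # (La, Lb) cross-modal affinity
    a_refined = softmax(affinity, axis=1) @ b  # each row of a attends over b
    b_refined = softmax(affinity.T, axis=1) @ a
    return a_refined, b_refined

# Hypothetical usage: 5 text-token features and 7 image-region features, d = 16
text_feats = np.random.rand(5, 16)
image_feats = np.random.rand(7, 16)
refined_text, refined_image = co_attention(text_feats, image_feats)
```

In the paper's scheme, such a pass would be applied at each of the three levels (entities–image, text–knowledge, refined text–refined image), with the refined outputs fused for classification.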
Pages: 8
Related Papers
50 records
  • [31] MLF3D: Multi-Level Fusion for Multi-Modal 3D Object Detection
    Jiang, Han
    Wang, Jianbin
    Xiao, Jianru
    Zhao, Yanan
    Chen, Wanqing
    Ren, Yilong
    Yu, Haiyang
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 1588 - 1593
  • [32] Hierarchical graph attention networks for multi-modal rumor detection on social media
    Xu, Fan
    Zeng, Lei
    Huang, Qi
    Yan, Keyu
    Wang, Mingwen
    Sheng, Victor S.
    NEUROCOMPUTING, 2024, 569
  • [33] Multi-level, multi-modal interactions for visual question answering over text in images
    Chen, Jincai
    Zhang, Sheng
    Zeng, Jiangfeng
    Zou, Fuhao
    Li, Yuan-Fang
    Liu, Tao
    Lu, Ping
    World Wide Web, 2022, 25 (04) : 1607 - 1623
  • [35] Explanation as a Process: User-Centric Construction of Multi-level and Multi-modal Explanations
    Finzel, Bettina
    Tafler, David E.
    Scheele, Stephan
    Schmid, Ute
    ADVANCES IN ARTIFICIAL INTELLIGENCE, KI 2021, 2021, 12873 : 80 - 94
  • [36] Complex Multi-modal Multi-level Influence Networks - Affordable Housing Case Study
    Beautement, Patrick
    Broenner, Christine
    COMPLEX SCIENCES, PT 2, 2009, 5 : 2054 - 2063
  • [38] MRCap: Multi-modal and Multi-level Relationship-based Dense Video Captioning
    Chen, Wei
    Niu, Jianwei
    Liu, Xuefeng
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2615 - 2620
  • [39] MLSFF: Multi-level structural features fusion for multi-modal knowledge graph completion
    Zhai, Hanming
    Lv, Xiaojun
    Hou, Zhiwen
    Tong, Xin
    Bu, Fanliang
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2023, 20 (08) : 14096 - 14116
  • [40] SiamMMF: multi-modal multi-level fusion object tracking based on Siamese networks
    Yang, Zhen
    Huang, Peng
    He, Dunyun
    Cai, Zhongwang
    Yin, Zhijian
    MACHINE VISION AND APPLICATIONS, 2023, 34 (01)