Multi-level Interaction Network for Multi-Modal Rumor Detection

Cited: 0
Authors
Zou, Ting [1]
Qian, Zhong [1]
Li, Peifeng [1]
Affiliations
[1] Soochow Univ, Sch Comp Sci & Technol, Suzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
multi-modal rumor detection; multi-level interaction network; external knowledge; multi-modal fusion
DOI
10.1109/IJCNN54540.2023.10191639
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The rapid development of social platforms has intensified the creation and spread of rumors. Hence, automatic Rumor Detection (RD) is an important and urgent task for protecting public interests and social harmony. As one of the frontier subtasks of RD, Multi-Modal Rumor Detection (MMRD) has recently become a new research hotspot. Previous methods focused on inferring clues from media content while ignoring the rich knowledge contained in texts and images. Moreover, existing methods are limited to cascade operators for encoding multi-modal relationships, which cannot capture the interactions between modalities. In this paper, we propose a novel Multi-level Interaction Network (MIN), which regards entities and their relevant external knowledge as prior knowledge to provide additional features. Meanwhile, in MIN, we design a Co-Attention Network (CAN) to implement three-level interactions (i.e., between entities and the image, between text and external knowledge, and between the refined text and the refined image) for multi-modal fusion. Experimental results on three public datasets (i.e., Fakeddit, Pheme and Weibo) demonstrate that our MIN model outperforms the state-of-the-art methods.
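The record does not include the paper's architectural details, but the co-attention fusion the abstract describes can be illustrated with a generic sketch: each modality builds an affinity matrix against the other and attends over it to produce "refined" features. This is a minimal, hypothetical illustration of scaled dot-product co-attention, not the authors' CAN; the function name and dimensions are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(text_feats, image_feats):
    """One co-attention step: each modality attends to the other.

    text_feats:  (n_tokens, d)  text token features
    image_feats: (n_regions, d) image region features
    Returns text features refined by image context, and vice versa.
    """
    d = text_feats.shape[1]
    # Affinity between every text token and every image region.
    affinity = text_feats @ image_feats.T / np.sqrt(d)
    # Text attends over image regions; image attends over text tokens.
    text_refined = softmax(affinity, axis=1) @ image_feats
    image_refined = softmax(affinity.T, axis=1) @ text_feats
    return text_refined, image_refined

# Toy example: 4 text tokens, 3 image regions, dimension 8.
rng = np.random.default_rng(0)
t = rng.standard_normal((4, 8))
v = rng.standard_normal((3, 8))
t_ref, v_ref = co_attention(t, v)
print(t_ref.shape, v_ref.shape)  # (4, 8) (3, 8)
```

Stacking such steps would give the multi-level interactions the abstract mentions (entity–image, text–knowledge, refined text–refined image), with each level consuming the refined outputs of the previous one.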
Pages: 8