MAFE: Multi-modal Alignment via Mutual Information Maximum Perspective in Multi-modal Fake News Detection

Cited by: 0
|
Authors
Qin, Haimei [1 ,2 ]
Jing, Yaqi [3 ]
Duan, Yunqiang [3 ]
Jiang, Lei [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
[3] Coordinat Ctr China, Natl Comp Network Emergency Response Tech Team, Beijing, Peoples R China
Keywords
Social media; Multi-modal Fake News Detection; Multi-modal Alignment;
DOI
10.1109/CSCWD61410.2024.10580548
Chinese Library Classification (CLC): TP39 [Computer Applications];
Discipline Classification Code: 081203; 0835;
Abstract
With the rapid advancement of social media, fake news in multi-modal forms has spread increasingly widely, not only misleading individuals but also disrupting the social order. Many researchers have proposed methods that exploit both visual and textual information for fake news detection. Despite these valuable attempts, current methods still struggle with two crucial challenges: calculating the relevance between different modalities and understanding their influence on decision-making in fake news detection. To overcome these challenges, we propose MAFE, a Multi-modal Alignment FakE news detection method built on a mutual information maximization perspective. By jointly modeling multi-modal context information in a multi-modal alignment module, we align the different modalities through mutual information maximization. To achieve this, we leverage a text encoder and an image encoder to learn better representations for text and images, respectively. These features are then fed into the multi-modal alignment module to capture interactions between the two modalities and compute their relevance. Finally, an attention mechanism weighs the features, guiding the feature-fusion and decision-making stages. The effectiveness of MAFE is demonstrated through extensive experiments on two real-world datasets.
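The alignment-by-mutual-information idea described in the abstract is commonly realized with an InfoNCE-style contrastive objective, which lower-bounds the mutual information between paired text and image embeddings. The following is a minimal, dependency-free sketch of such an objective; the function name, the use of cosine similarity, and the temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import math

def infonce_alignment_loss(text_feats, image_feats, temperature=0.07):
    """InfoNCE-style lower bound on mutual information between paired
    text/image embeddings (hypothetical sketch, not MAFE's exact loss).
    Row i of each list is assumed to be a matched text-image pair."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    n = len(text_feats)
    loss = 0.0
    for i in range(n):
        # Similarity of text i to every image; the matched image is index i.
        sims = [cosine(text_feats[i], image_feats[j]) / temperature
                for j in range(n)]
        m = max(sims)  # log-sum-exp stabilization
        log_denom = m + math.log(sum(math.exp(s - m) for s in sims))
        # Negative log-softmax probability of the matched pair.
        loss += -(sims[i] - log_denom)
    return loss / n
```

Minimizing this loss pulls matched text-image pairs together and pushes mismatched pairs apart, which is one standard way to instantiate the "maximize mutual information across modalities" objective the abstract describes.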
Pages: 1515 - 1521
Page count: 7
Related Papers
50 records
  • [1] Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-Modal Fake News Detection
    Chen, Jinyin
    Jia, Chengyu
    Zheng, Haibin
    Chen, Ruoxi
    Fu, Chenbo
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2023, 10 (06): 3144 - 3158
  • [2] Multi-modal Chinese Fake News Detection
    Huang, Wenxi
    Zhao, Zhangyi
    Chen, Xiaojun
    Li, Mark Junjie
    Zhang, Qin
    Fournier-Viger, Philippe
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 109 - 117
  • [3] Multi-modal transformer for fake news detection
    Yang, Pingping
    Ma, Jiachen
    Liu, Yong
    Liu, Meng
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2023, 20 (08) : 14699 - 14717
  • [4] Leveraging Supplementary Information for Multi-Modal Fake News Detection
    Ho, Chia-Chun
    Dai, Bi-Ru
    2023 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGIES FOR DISASTER MANAGEMENT, ICT-DM, 2023, : 50 - 54
  • [5] ConvNet frameworks for multi-modal fake news detection
    Raj, Chahat
    Meel, Priyanka
    Applied Intelligence, 2021, 51 : 8132 - 8148
  • [6] An effective strategy for multi-modal fake news detection
    Xu Peng
    Bao Xintong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (10) : 13799 - 13822
  • [7] An effective strategy for multi-modal fake news detection
    Xu Peng
    Bao Xintong
    Multimedia Tools and Applications, 2022, 81 : 13799 - 13822
  • [8] Multi-Modal Component Embedding for Fake News Detection
    Kang, SeongKu
    Hwang, Junyoung
    Yu, Hwanjo
    PROCEEDINGS OF THE 2020 14TH INTERNATIONAL CONFERENCE ON UBIQUITOUS INFORMATION MANAGEMENT AND COMMUNICATION (IMCOM), 2020,
  • [9] ConvNet frameworks for multi-modal fake news detection
    Raj, Chahat
    Meel, Priyanka
    APPLIED INTELLIGENCE, 2021, 51 (11) : 8132 - 8148
  • [10] SpotFake: A Multi-modal Framework for Fake News Detection
    Singhal, Shivangi
    Shah, Rajiv Ratn
    Chakraborty, Tanmoy
    Kumaraguru, Ponnurangam
    Satoh, Shin'ichi
    2019 IEEE FIFTH INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA (BIGMM 2019), 2019, : 39 - 47