Multimodal Co-training for Fake News Identification Using Attention-aware Fusion

Cited by: 4
Authors
Das Bhattacharjee, Sreyasee [1]
Yuan, Junsong [1]
Institution
[1] SUNY Buffalo, Buffalo, NY 14260 USA
Keywords
Fake news detection; Rumor; Multimodal classification; Co-training; Attention; Feature fusion;
DOI
10.1007/978-3-031-02444-3_21
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
The rapid dissemination of fake news intended to mislead the large population of online information-sharing platforms is a societal problem receiving increasing attention. A critical challenge in this scenario is that multimodal content shared online, e.g., text supported by photos, is frequently created with the aim of attracting readers' attention. While 'fakeness' is not, in general, synonymous with 'falsity', the objectives behind creating such content vary widely. The content may provide additional information for clarification; very frequently, however, it propagates fabricated or biased information to purposefully mislead, or the image is intentionally manipulated to fool the audience. Our objective in this work is therefore to evaluate the veracity of a news item by addressing a two-fold task: (1) determining whether the image or the text component of the content is fabricated, and (2) detecting inconsistencies between the image and text components, which may prove the image to be out of context. We propose an effective attention-aware joint representation learning framework that learns comprehensive fine-grained data patterns by correlating each word in the text component with each potential object region in the image component. By designing a novel multimodal co-training mechanism that leverages class-label information within a contrastive-loss-based optimization, the proposed method shows significant promise in identifying cross-modal inconsistencies. Consistent out-performance of other state-of-the-art methods (in both accuracy and F1-score) on two large-scale datasets, which cover different types of fake-news characteristics (defining information veracity at various levels of detail, such as 'false', 'false connection', 'misleading', and 'manipulative' content), topics, and domains, demonstrates the feasibility of our approach.
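The abstract's core mechanism can be sketched at a high level: each word attends over all candidate image regions, and a contrastive loss pulls consistent image-text pairs together while pushing inconsistent ones apart. The following is a minimal, illustrative numpy sketch under assumed shapes and a simple margin-based contrastive form; the function names and the loss formulation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cross_modal_attention(words, regions):
    """Attend each word over all image regions (scaled dot-product attention).

    words:   (n_words, d)   word embeddings
    regions: (n_regions, d) object-region features
    Returns fused per-word features (n_words, 2*d): word + its region summary.
    """
    d = words.shape[-1]
    scores = words @ regions.T / np.sqrt(d)             # (n_words, n_regions)
    scores -= scores.max(axis=1, keepdims=True)         # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)       # softmax over regions
    attended = weights @ regions                        # region summary per word
    return np.concatenate([words, attended], axis=-1)

def contrastive_loss(z_img, z_txt, consistent, margin=1.0):
    """Margin-based contrastive loss on a pooled image/text embedding pair."""
    dist = np.linalg.norm(z_img - z_txt)
    if consistent:                                      # matched pair: pull together
        return 0.5 * dist ** 2
    return 0.5 * max(0.0, margin - dist) ** 2           # mismatched: push apart
```

In a full pipeline, the fused per-word features would be pooled and fed, together with a class label, into the co-training objective; here the loss is shown on a single pooled pair only.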
Pages: 282-296 (15 pages)
Related Papers (50 total)
  • [1] Multimodal Fusion with Co-Attention Networks for Fake News Detection
    Wu, Yang
    Zhan, Pengwei
    Zhang, Yunjian
    Wang, Liming
    Xu, Zhen
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 2560 - 2569
  • [2] Multimodal Fusion with BERT and Attention Mechanism for Fake News Detection
    Nguyen Manh Duc Tuan
    Pham Quang Nhat Minh
    2021 RIVF INTERNATIONAL CONFERENCE ON COMPUTING AND COMMUNICATION TECHNOLOGIES (RIVF 2021), 2021, : 43 - 48
  • [3] Multimodal Relationship-aware Attention Network for Fake News Detection
    Yang, Hongyu
    Zhang, Jinjiao
    Hu, Ze
    Zhang, Liang
    Cheng, Xiang
    2023 INTERNATIONAL CONFERENCE ON DATA SECURITY AND PRIVACY PROTECTION, DSPP, 2023, : 143 - 149
  • [4] MRAN: Multimodal relationship-aware attention network for fake news detection
    Yang, Hongyu
    Zhang, Jinjiao
    Zhang, Liang
    Cheng, Xiang
    Hu, Ze
    COMPUTER STANDARDS & INTERFACES, 2024, 89
  • [5] Multimodal matching-aware co-attention networks with mutual knowledge distillation for fake news detection
    Hu, Linmei
    Zhao, Ziwang
    Qi, Weijian
    Song, Xuemeng
    Nie, Liqiang
    INFORMATION SCIENCES, 2024, 664
  • [6] A mutual attention based multimodal fusion for fake news detection on social network
    Guo, Ying
    APPLIED INTELLIGENCE, 2023, 53 (12) : 15311 - 15320
  • [7] Knowledge-aware multimodal pre-training for fake news detection
    Zhang, Litian
    Zhang, Xiaoming
    Zhou, Ziyi
    Zhang, Xi
    Yu, Philip S.
    Li, Chaozhuo
    INFORMATION FUSION, 2025, 114
  • [8] AMPLE: Emotion-Aware Multimodal Fusion Prompt Learning for Fake News Detection
    Xu, Xiaoman
    Li, Xiangrun
    Wang, Taihang
    Jiang, Ye
    MULTIMEDIA MODELING, MMM 2025, PT I, 2025, 15520 : 86 - 100
  • [9] The Network of Attention-Aware Multimodal fusion for RGB-D Indoor Semantic Segmentation Method
    Zhao, Qiankun
    Wan, Yingcai
    Fang, Lijin
    Wang, Huaizhen
    2022 34TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2022, : 5093 - 5098