Multi-modal Robustness Fake News Detection with Cross-Modal and Propagation Network Contrastive Learning

Authors
Chen, Han [1 ,2 ]
Wang, Hairong [1 ]
Liu, Zhipeng [1 ]
Li, Yuhua [1 ]
Hu, Yifan [3 ]
Zhang, Yujing [1 ]
Shu, Kai [4 ]
Li, Ruixuan [1 ]
Yu, Philip S. [5 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China
[2] Huazhong Univ Sci & Technol, Inst Artificial Intelligence, Wuhan 430074, Peoples R China
[3] Univ Sydney, Sch Comp Sci, Sydney 2006, Australia
[4] Emory Univ, Dept Comp Sci, Atlanta, GA 30322 USA
[5] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; Multi-modal; Fake news detection; Limited labeled data; Mismatched pairs scenario;
DOI
10.1016/j.knosys.2024.112800
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Social media has transformed the landscape of news dissemination, characterized by its rapid, extensive, and diverse content, coupled with the challenge of verifying authenticity. The proliferation of multimodal news on these platforms has presented novel obstacles in detecting fake news. Existing approaches typically focus on a single modality, such as text or images, or combine text with image content or with propagation network data. However, the potential for more robust fake news detection lies in considering all three modalities simultaneously. In addition, the heavy reliance on labeled data in current detection methods proves time-consuming and costly. To address these challenges, we propose a novel approach, Multi-modal Robustness Fake News Detection with Cross-Modal and Propagation Network Contrastive Learning (MFCL). This method integrates intrinsic features from text, images, and propagation networks, capturing essential intermodal relationships for accurate fake news detection. Contrastive learning is employed to learn intrinsic features while mitigating the issue of limited labeled data. Furthermore, we introduce image-text matching (ITM) data augmentation to ensure consistent image-text representations and employ adaptive propagation (AP) network data augmentation for high-order feature learning. We utilize contextual transformers to bolster the effectiveness of fake news detection, unveiling crucial intermodal connections in the process. Experimental results on real-world datasets demonstrate that MFCL outperforms existing methods, maintaining high accuracy and robustness even with limited labeled data and mismatched pairs. Our code is available at https://github.com/HanChen-HUST/KBSMFCL.
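The cross-modal contrastive learning mentioned in the abstract is commonly realized as an InfoNCE-style objective, in which matched text-image pairs within a batch serve as positives and all other pairings as negatives. The following is a minimal NumPy sketch of that general technique, not the paper's actual implementation; the function name, temperature value, and embedding shapes are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    text_emb, image_emb: arrays of shape (B, D), where row i of each
    array forms a matched (positive) pair; every other row pairing in
    the batch acts as a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature  # (B, B) similarity matrix

    def ce_diag(l):
        # Softmax cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Contrast in both directions: text -> image and image -> text.
    return 0.5 * (ce_diag(logits) + ce_diag(logits.T))
```

Aligned pairs (identical or highly similar embeddings on the diagonal) yield a low loss, while shuffled pairings yield a high one, which is the signal a model exploits to pull matched modalities together.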
Pages: 14