Multi-modal Adversarial Training for Crisis-related Data Classification on Social Media

Cited: 3
Authors
Chen, Qi [1 ]
Wang, Wei [1 ]
Huang, Kaizhu [2 ]
De, Suparna [3 ]
Coenen, Frans [4 ]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Dept Comp Sci & Software Engn, Suzhou, Peoples R China
[2] Xian Jiaotong Liverpool Univ, Dept Elect & Elect Engn, Suzhou, Peoples R China
[3] Univ Winchester, Comp Sci & Networks Dept Digital Technol, Winchester, Hants, England
[4] Univ Liverpool, Dept Comp Sci, Liverpool, Merseyside, England
Keywords
Adversarial training; Crisis-related data classification; Convolutional neural network; Smart city; Deep learning;
DOI
10.1109/SMARTCOMP50058.2020.00051
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Social media platforms such as Twitter are increasingly used to collect data of all kinds. During natural disasters, users may post text and image data on social media platforms to report information about infrastructure damage, injured people, cautions and warnings. Effectively processing and analysing tweets in real time can help city organisations gain situational awareness of affected citizens and take timely action. With advances in deep learning techniques, recent studies have significantly improved performance in classifying crisis-related tweets. However, deep learning models are vulnerable to adversarial examples, which may be imperceptible to humans but can lead to misclassification. To process multi-modal data and improve the robustness of deep learning models, we propose a multi-modal adversarial training method for crisis-related tweet classification. The evaluation results clearly demonstrate the advantages of the proposed model in improving the robustness of tweet classification.
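As a rough illustration of the adversarial-training idea the abstract describes (not the authors' implementation), the sketch below generates FGSM-style adversarial examples for a toy logistic classifier and updates the weights on both clean and perturbed inputs. The model, the function names (`predict`, `fgsm`, `train_step`), and the hyper-parameters are all hypothetical stand-ins for the paper's multi-modal network.

```python
# Minimal sketch of adversarial training, assuming a toy logistic model.
import math

def predict(w, b, x):
    """Logistic prediction for a single feature vector x."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def loss_grad_x(w, b, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = predict(w, b, x)
    return [(p - y) * wi for wi in w]

def fgsm(w, b, x, y, eps=0.1):
    """Fast Gradient Sign Method: nudge x in the direction that raises the loss."""
    g = loss_grad_x(w, b, x, y)
    return [xi + eps * (1 if gi > 0 else -1 if gi < 0 else 0)
            for xi, gi in zip(x, g)]

def train_step(w, b, x, y, eps=0.1, lr=0.5):
    """One adversarial-training step: fit both the clean and the perturbed input."""
    x_adv = fgsm(w, b, x, y, eps)
    for xv in (x, x_adv):
        p = predict(w, b, xv)
        w = [wi - lr * (p - y) * xi for wi, xi in zip(w, xv)]
        b = b - lr * (p - y)
    return w, b
```

Training on the FGSM-perturbed copies is what confers the robustness the abstract claims: the model learns to classify inputs correctly even after a small worst-case perturbation.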
Pages: 232-237
Number of pages: 6
Related Papers
50 records
  • [31] Multi-modal affine fusion network for social media rumor detection
    Fu, Boyang
    Sui, Jie
    PEERJ COMPUTER SCIENCE, 2022, 8
  • [32] Multi-modal Semantic Inconsistency Detection in Social Media News Posts
    McCrae, Scott
    Wang, Kehan
    Zakhor, Avideh
    MULTIMEDIA MODELING, MMM 2022, PT II, 2022, 13142 : 331 - 343
  • [33] Multi-modal Feature Distillation Emotion Recognition Method For Social Media
    Chang, Xue
    Wang, Mingjiang
    Deng, Xiao
    2024 IEEE 24TH INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY, QRS, 2024, : 445 - 454
  • [34] A Multi-Modal Approach for the Detection of Account Anonymity on Social Media Platforms
    Wang, Bo
    Guo, Jie
    Huang, Zheng
    Qiu, Weidong
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [35] MOBILE APPLICATIONS FOR TRAINING MULTI-MODAL MOBILITY USING SOCIAL NETWORKS
    Suciu, George
    Butca, Cristina
    Necula, Lucian
    ELEARNING VISION 2020!, VOL II, 2016, : 460 - 466
  • [36] MULTI-MODAL APPROACH TO INDEXING AND CLASSIFICATION
    SWIFT, DF
    WINN, VA
    BRAMER, DA
    INTERNATIONAL CLASSIFICATION, 1977, 4 (02): : 90 - 94
  • [37] Multi-modal Semantic Place Classification
    Pronobis, A.
    Mozos, O. Martinez
    Caputo, B.
    Jensfelt, P.
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2010, 29 (2-3): : 298 - 320
  • [38] A Hierarchical Correlation Model for Multi-modal Sentiment Analysis on Social Media
    Lin, Dazhen
    Li, Lingxiao
    Cao, Donglin
    Li, Shaozi
    12TH CHINESE CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK AND SOCIAL COMPUTING (CHINESECSCW 2017), 2017, : 41 - 47
  • [39] Popularity Prediction of Social Media based on Multi-Modal Feature Mining
    Hsu, Chih-Chung
    Kang, Li-Wei
    Lee, Chia-Yen
    Lee, Jun-Yi
    Zhang, Zhong-Xuan
    Wu, Shao-Min
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 2687 - 2691
  • [40] Multi-modal long document classification based on Hierarchical Prompt and Multi-modal Transformer
    Liu, Tengfei
    Hu, Yongli
    Gao, Junbin
    Wang, Jiapu
    Sun, Yanfeng
    Yin, Baocai
    NEURAL NETWORKS, 2024, 176