Multi-modal Adversarial Training for Crisis-related Data Classification on Social Media

Cited by: 3
Authors
Chen, Qi [1 ]
Wang, Wei [1 ]
Huang, Kaizhu [2 ]
De, Suparna [3 ]
Coenen, Frans [4 ]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Dept Comp Sci & Software Engn, Suzhou, Peoples R China
[2] Xian Jiaotong Liverpool Univ, Dept Elect & Elect Engn, Suzhou, Peoples R China
[3] Univ Winchester, Comp Sci & Networks Dept Digital Technol, Winchester, Hants, England
[4] Univ Liverpool, Dept Comp Sci, Liverpool, Merseyside, England
Keywords
Adversarial training; Crisis-related data classification; Convolutional neural network; Smart city; Deep learning;
DOI
10.1109/SMARTCOMP50058.2020.00051
Chinese Library Classification: TP18 [Artificial Intelligence Theory];
Discipline Classification Codes: 081104; 0812; 0835; 1405;
Abstract
Social media platforms such as Twitter are increasingly used to collect data of all kinds. During natural disasters, users may post text and image data on social media platforms to report information about infrastructure damage, injured people, cautions and warnings. Effectively processing and analysing these tweets in real time can help city organisations gain situational awareness of affected citizens and take timely action. With advances in deep learning techniques, recent studies have significantly improved performance in classifying crisis-related tweets. However, deep learning models are vulnerable to adversarial examples, which may be imperceptible to humans but can cause a model to misclassify. To process multi-modal data as well as improve the robustness of deep learning models, we propose a multi-modal adversarial training method for crisis-related tweet classification in this paper. The evaluation results clearly demonstrate the advantages of the proposed model in improving the robustness of tweet classification.
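The record does not include the paper's implementation. As a rough illustration of the general idea behind adversarial training that the abstract alludes to — training on both clean inputs and gradient-based adversarial perturbations of them — here is a minimal sketch on a toy logistic classifier using an FGSM-style perturbation. All function names, hyperparameters, and the toy model are illustrative assumptions, not the authors' multi-modal method.

```python
import math

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def predict(w, b, x):
    # Probability that input x belongs to the positive class.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def input_gradient(w, b, x, y):
    # Gradient of binary cross-entropy loss w.r.t. the INPUT x: (p - y) * w_i.
    p = predict(w, b, x)
    return [(p - y) * wi for wi in w]

def fgsm(w, b, x, y, eps):
    # Fast Gradient Sign Method: step the input in the direction
    # that increases the loss, bounded by eps per dimension.
    g = input_gradient(w, b, x, y)
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

def adversarial_train(data, dim, eps=0.1, lr=0.5, epochs=200):
    # Train on each clean example AND its adversarial counterpart,
    # which is the core idea of adversarial training.
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            for xt in (x, fgsm(w, b, x, y, eps)):
                grad = predict(w, b, xt) - y  # dL/dz for cross-entropy
                w = [wi - lr * grad * xi for wi, xi in zip(w, xt)]
                b -= lr * grad
    return w, b
```

In the paper's setting the classifier would be a deep multi-modal network over text and image features rather than this two-weight toy, but the training loop has the same shape: each batch is augmented with perturbed copies generated against the current model.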
Pages: 232-237 (6 pages)