Automatic report generation based on multi-modal information

Cited by: 0
Authors
Jing Zhang
Xiaoxue Li
Weizhi Nie
Yuting Su
Institution
[1] Tianjin University, School of Electronics Information Engineering
Keywords
News event detection; Multi-modal; Report generation
DOI: not available
Abstract
In this paper, we propose a new framework that utilizes multi-modal social media information to automatically generate event reports for users or government agencies. First, we use DBSCAN (Density-Based Spatial Clustering of Applications with Noise) to detect events from official news websites. Then, unofficial details are extracted from social network platforms (Foursquare, Twitter, YouTube) and leveraged to enrich the official report, surfacing latent but useful information. In this step, we apply classic text-processing methods and computer vision techniques to reduce the noise in user-generated content (UGC). We then apply an LSTM-CNN model to generate captions for the related images, converting visual information into textual information. Finally, we extract latent topics with a graph-clustering method to generate the final report. To demonstrate the effectiveness of our framework, we collected a large multi-source event dataset from official news websites and Twitter. A user study further demonstrates the practicality of our approach.
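
As a rough illustration of the event-detection step described in the abstract, the sketch below clusters short news texts with scikit-learn's DBSCAN over TF-IDF vectors. This is a minimal sketch under assumed settings: the sample headlines, the cosine metric, and the eps/min_samples values are illustrative choices, not the paper's actual configuration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

# Toy stand-ins for scraped news articles (illustrative only).
news_articles = [
    "Flooding closes downtown streets after record rainfall",
    "Record rainfall floods the city center and closes streets",
    "City council approves the new transit budget",
    "Transit budget passes the city council vote",
]

# Represent each article as a sparse TF-IDF vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(news_articles)

# DBSCAN with cosine distance; eps and min_samples would need tuning on real data.
labels = DBSCAN(eps=0.8, min_samples=2, metric="cosine").fit_predict(vectors)

# Articles sharing a label belong to the same detected event; -1 marks noise.
for label, article in zip(labels, news_articles):
    print(label, article)

DBSCAN suits this setting because the number of events need not be fixed in advance, and off-topic items can simply be left unassigned as noise.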
Pages: 12005-12015
Number of pages: 10