Text-to-Traffic Generative Adversarial Network for Traffic Situation Generation

Cited by: 7
Authors
Huo, Guangyu [1]
Zhang, Yong [1]
Wang, Boyue [1]
Hu, Yongli [1]
Yin, Baocai [1]
Affiliations
[1] Beijing Univ Technol BJUT, Fac Informat Technol, Beijing Municipal Key Lab Multimedia & Intelligen, Beijing 100124, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Traffic situation generation; text-to-image translation; conditional generative adversarial network; FLOW;
DOI
10.1109/TITS.2021.3136143
Chinese Library Classification (CLC)
TU [Building Science];
Subject Classification Code
0813;
Abstract
Traffic situation generation is important in the intelligent transportation field for evaluating and simulating macroscopic traffic conditions. Governments often analyze future traffic situations using historical traffic data from the same weekday, but this works poorly because it lacks traffic-related information such as weather, location, traffic accidents, and social activities. Accurately generating the traffic situation is therefore a challenging problem. Fortunately, the massive amount of traffic-related information spread on social media often indicates how the traffic situation will evolve, and thus provides sufficient information for traffic situation generation. In this paper, we propose a novel Text-to-Traffic generative adversarial network framework (T²GAN), which fuses traffic data with the semantic information collected from social media to generate the traffic situation. To reduce the large gap between these two modalities and improve the authenticity of the generated traffic situation, we propose a global-local loss. Additionally, we build a heterogeneous dataset containing traffic-related text data collected from social media and the corresponding traffic passenger-flow data. Experimental results show that the proposed method clearly outperforms many strong neural-network-based traffic situation generation methods.
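The abstract only sketches the architecture, so the following is a minimal, hypothetical PyTorch sketch of the general idea rather than the authors' implementation: a conditional generator fuses a social-media text embedding with noise to produce a passenger-flow sequence, and a discriminator scores both the whole sequence and local windows, loosely mirroring the described global-local criterion. All dimensions, layer sizes, and names (TextConditionedGenerator, GlobalLocalDiscriminator, the window size) are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn as nn

# Hypothetical dimensions; the paper does not report these values.
TEXT_DIM, NOISE_DIM, FLOW_LEN = 128, 64, 32   # text embedding size, noise size, flow-sequence length


class TextConditionedGenerator(nn.Module):
    # Generates a traffic passenger-flow sequence conditioned on a text embedding (conditional-GAN style).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TEXT_DIM + NOISE_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, FLOW_LEN),
        )

    def forward(self, text_emb, noise):
        # Fuse social-media text semantics with random noise before decoding into a flow sequence.
        return self.net(torch.cat([text_emb, noise], dim=-1))


class GlobalLocalDiscriminator(nn.Module):
    # Scores (text, flow) pairs with a global head over the whole sequence and a local head over
    # short windows, a rough stand-in for combining global and local criteria.
    def __init__(self, window=8):
        super().__init__()
        self.window = window
        self.global_head = nn.Sequential(nn.Linear(TEXT_DIM + FLOW_LEN, 128), nn.ReLU(), nn.Linear(128, 1))
        self.local_head = nn.Sequential(nn.Linear(TEXT_DIM + window, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, text_emb, flow):
        g = self.global_head(torch.cat([text_emb, flow], dim=-1))           # global score, shape (B, 1)
        windows = flow.unfold(1, self.window, self.window)                  # (B, n_windows, window)
        text_rep = text_emb.unsqueeze(1).expand(-1, windows.size(1), -1)    # repeat text for each window
        l = self.local_head(torch.cat([text_rep, windows], dim=-1)).mean(dim=1)  # averaged local score
        return g + l


# Toy usage with random tensors standing in for text embeddings and real passenger-flow data.
G, D = TextConditionedGenerator(), GlobalLocalDiscriminator()
bce = nn.BCEWithLogitsLoss()
text = torch.randn(4, TEXT_DIM)
real_flow = torch.randn(4, FLOW_LEN)
fake_flow = G(text, torch.randn(4, NOISE_DIM))
d_loss = bce(D(text, real_flow), torch.ones(4, 1)) + bce(D(text, fake_flow.detach()), torch.zeros(4, 1))
g_loss = bce(D(text, fake_flow), torch.ones(4, 1))

In this sketch the generator and discriminator would be trained alternately on d_loss and g_loss; the actual T²GAN text encoder, network depth, and loss weighting are not specified in the record above.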
Pages: 2623-2636
Number of pages: 14
Related Papers
50 items in total
  • [11] Network Traffic Anomaly Detection Based on Generative Adversarial Network and Transformer
    Wang, Zhurong
    Zhou, Jing
    Hei, Xinhong
    ADVANCES IN NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY, ICNC-FSKD 2022, 2023, 153 : 228 - 235
  • [12] Road traffic network state prediction based on a generative adversarial network
    Xu, Dongwei
    Peng, Peng
    Wei, Chenchen
    He, Defeng
    Xuan, Qi
    IET INTELLIGENT TRANSPORT SYSTEMS, 2020, 14 (10) : 1286 - 1294
  • [13] Customizable text generation via conditional text generative adversarial network
    Chen, Jinyin
    Wu, Yangyang
    Jia, Chengyu
    Zheng, Haibin
    Huang, Guohan
    NEUROCOMPUTING, 2020, 416: 125 - 135
  • [14] Wasserstein Generative Adversarial Networks for Realistic Traffic Sign Image Generation
    Dewi, Christine
    Chen, Rung-Ching
    Liu, Yan-Ting
    INTELLIGENT INFORMATION AND DATABASE SYSTEMS, ACIIDS 2021, 2021, 12672 : 479 - 493
  • [15] Traffic Accident Data Generation Based on Improved Generative Adversarial Networks
    Chen, Zhijun
    Zhang, Jingming
    Zhang, Yishi
    Huang, Zihao
    SENSORS, 2021, 21 (17)
  • [16] Synthetic Traffic Sign Image Generation Applying Generative Adversarial Networks
    Dewi, Christine
    Chen, Rung-Ching
    Liu, Yan-Ting
    VIETNAM JOURNAL OF COMPUTER SCIENCE, 2022, 09 (03) : 333 - 348
  • [17] A rapid approach to urban traffic noise mapping with a generative adversarial network
    Yang, Xinhao
    Han, Zhen
    Lu, Xiaodong
    Zhang, Yuan
    APPLIED ACOUSTICS, 2025, 228
  • [18] Traffic identification model based on generative adversarial deep convolutional network
    Dong, Shi
    Xia, Yuanjun
    Peng, Tao
    ANNALS OF TELECOMMUNICATIONS, 2022, 77 (9-10) : 573 - 587
  • [19] DPNeT: Differentially Private Network Traffic Synthesis with Generative Adversarial Networks
    Fan, Liyue
    Pokkunuru, Akarsh
    DATA AND APPLICATIONS SECURITY AND PRIVACY XXXV, 2021, 12840 : 3 - 21