Social Self-Attention Generative Adversarial Networks for Human Trajectory Prediction

Cited by: 2
Authors
Yang C. [1 ]
Pan H. [1 ]
Sun W. [1 ]
Gao H. [1 ]
Affiliations
[1] Harbin Institute of Technology, Research Institute of Intelligent Control and Systems, Harbin
Keywords
Generative adversarial networks (GANs); self-attention; social interactions; trajectory prediction;
DOI
10.1109/TAI.2023.3299899
Abstract
Predicting accurate human future trajectories is of critical importance for self-driving vehicles if they are to navigate complex scenarios. Human trajectories depend not only on the pedestrians themselves but also on their interactions with surrounding agents. Previous works mainly model interactions among agents with a variety of aggregation methods that integrate the learned agent states somewhat indiscriminately. In this article, we propose social self-attention generative adversarial networks (Social SAGAN), which generate socially acceptable multimodal trajectory predictions. Social SAGAN comprises a generator that predicts pedestrians' future trajectories, a discriminator that classifies trajectory predictions as real or fake, and a social self-attention mechanism that selectively refines the most interactive information and helps the overall model learn what to pay attention to. Through extensive experiments, we demonstrate that our model achieves competitive prediction accuracy and computational complexity compared with previous state-of-the-art methods on all trajectory forecasting benchmarks. © 2020 IEEE.
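The abstract describes attention-based pooling of neighboring agents' states. As a rough, hypothetical illustration only (the paper's actual architecture uses learned query/key/value projections inside a GAN, which this pure-Python sketch omits), scaled dot-product self-attention across pedestrians can be written as:

```python
import math

def social_self_attention(queries, keys, values):
    """Scaled dot-product self-attention over a set of agents (pure Python).

    Each agent attends to every other agent; the softmax weights indicate
    which neighbors' encoded states matter most, so each output row is a
    socially weighted mixture of all agents' states. Illustrative sketch,
    not the Social SAGAN module itself.
    """
    d = len(keys[0])  # feature dimension, used for score scaling
    pooled = []
    for q in queries:
        # Similarity of this agent's query to every agent's key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Convex combination of all agents' value vectors.
        pooled.append([sum(w * v[j] for w, v in zip(weights, values))
                       for j in range(len(values[0]))])
    return pooled

# Toy example: three pedestrians with 2-D encoded states attending
# to one another (self-attention, so queries = keys = values).
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled = social_self_attention(states, states, states)
print(pooled)  # one socially pooled vector per pedestrian
```

Because the softmax weights are a convex combination, each pooled component stays within the range of the corresponding input components, which keeps the refined social features on the same scale as the per-agent encodings.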
Pages: 1805-1815
Page count: 10
Related Papers (50 total)
  • [31] Li, Yushi; Baciu, George. SAPCGAN: Self-Attention based Generative Adversarial Network for Point Clouds. Proceedings of the 2020 IEEE 19th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC 2020), 2020: 52-59
  • [32] Liu, Wei; Cao, Junxing; You, Jiachun; Wang, Haibo. Vector Decomposition of Elastic Seismic Wavefields Using Self-Attention Deep Convolutional Generative Adversarial Networks. Applied Sciences-Basel, 2023, 13 (16)
  • [33] Zheng, Jie; Xia, Andi; Shao, Lin; Wan, Tao; Qin, Zengchang. Stock Volatility Prediction Based on Self-attention Networks with Social Information. 2019 IEEE Conference on Computational Intelligence for Financial Engineering & Economics (CIFEr 2019), 2019: 134-140
  • [34] Zhou, Y.; Lin, M.; Chen, J.; Bai, Z.; Chen, M. Missing Data Imputation for Online Monitoring of Power Equipment Based on Self-attention Generative Adversarial Networks. Gaodianya Jishu/High Voltage Engineering, 2023, 49 (05): 1795-1809
  • [35] Luo, Jia; Huang, Jingying; Ma, Jiancheng; Liu, Siyuan. Application of self-attention conditional deep convolutional generative adversarial networks in the fault diagnosis of planetary gearboxes. Proceedings of the Institution of Mechanical Engineers Part O-Journal of Risk and Reliability, 2024, 238 (02): 260-273
  • [36] Liu, Tongzhe; Chen, Junyao; Wu, Ximei; Long, Bofeng; Wang, Lujie; He, Chenchen; Deng, Xuan; Deng, Hongwei; Chen, Zhong. RESAKey GAN: enhancing color image encryption through residual self-attention generative adversarial networks. Physica Scripta, 2025, 100 (03)
  • [37] Watanabe, Tomoki; Favaro, Paolo. A Unified Generative Adversarial Network Training via Self-Labeling and Self-Attention. International Conference on Machine Learning, Vol. 139, 2021
  • [38] Ding, Mu; Zhou, Yatong; Chi, Yue. Self-Attention Generative Adversarial Network Interpolating and Denoising Seismic Signals Simultaneously. Remote Sensing, 2024, 16 (02)
  • [39] Zhao, Jingqi; Rong, Chuitian; Dang, Xin; Sun, Huabo. QAR Data Imputation Using Generative Adversarial Network with Self-Attention Mechanism. Big Data Mining and Analytics, 2024, 7 (01): 12-28
  • [40] Zhong, Yuze; Tang, Zhaohui; Zhang, Hu; Xie, Yongfang; Gao, Xiaoliang. A froth image segmentation method via generative adversarial networks with multi-scale self-attention mechanism. Multimedia Tools and Applications, 2024, 83 (07): 19663-19682