DSFormer-LRTC: Dynamic Spatial Transformer for Traffic Forecasting With Low-Rank Tensor Compression

Times Cited: 0
Authors
Zhao, Jianli [1 ]
Zhuo, Futong [1 ]
Sun, Qiuxia [2 ]
Li, Qing [2 ]
Hua, Yiran [1 ]
Zhao, Jianye [1 ]
Affiliations
[1] Shandong Univ Sci & Technol, Coll Comp Sci & Engn, Qingdao 266590, Peoples R China
[2] Shandong Univ Sci & Technol, Coll Math & Syst Sci, Qingdao 266590, Peoples R China
Keywords
Traffic forecasting; deep learning; spatio-temporal prediction; tensor compression; FLOW; PREDICTION;
DOI
10.1109/TITS.2024.3436523
CLC Number
TU [Architecture Science];
Subject Classification Code
0813;
Abstract
Traffic flow forecasting is challenging due to the intricate spatio-temporal correlations in traffic patterns. Previous works captured spatial dependencies with graph neural networks and characterized spatial relationships through fixed graph construction methods, which limits a model's ability to capture dynamic and long-range spatial dependencies. Moreover, prior studies did not address the large number of redundant parameters in traffic prediction models, which both increases storage cost and reduces generalization ability. To address these challenges, we propose a Dynamic Spatial Transformer for Traffic Forecasting with Low-Rank Tensor Compression (DSFormer-LRTC). Specifically, we construct a global spatial Transformer to capture long-range spatial dependencies, while a distance-based mask matrix in a local spatial Transformer strengthens the influence of adjacent nodes. To reduce model complexity, the architecture separates temporal and spatial modeling. Meanwhile, we introduce low-rank tensor decomposition to reconstruct the parameter matrices of the Transformer modules and thereby compress the proposed model. Experimental results show that DSFormer-LRTC achieves state-of-the-art performance on four real-world datasets. Analysis of the attention matrices further confirms that the model learns dynamic and distant spatial features. Finally, compression reduces the original parameter size by two-thirds, while the model significantly outperforms the baselines in computational efficiency.
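The two mechanisms named in the abstract — a distance-based mask that restricts local attention to nearby nodes, and low-rank factorization of Transformer parameter matrices — can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, not the authors' implementation: the function names, the choice of truncated SVD as the factorization, and the distance threshold are all illustrative.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate a weight matrix W (d_out x d_in) as U_r @ V_r via truncated SVD.
    Storage drops from d_out*d_in to rank*(d_out + d_in) parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

def distance_mask(dist, threshold):
    """Additive attention mask: 0 for node pairs within `threshold`, -inf otherwise,
    so attention beyond the threshold is zeroed after the softmax."""
    return np.where(dist <= threshold, 0.0, -np.inf)

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention with an additive mask on the score matrix."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1]) + mask
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

With `d_out = d_in = 64` and `rank = 16`, the factorized layer stores 16·(64+64) = 2048 parameters instead of 64·64 = 4096, a 50% reduction for this single matrix; the two-thirds figure reported in the abstract applies to the whole model, not to any one layer.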
Pages: 16323-16335 (13 pages)
Related Papers
50 items total
  • [21] LRTD: A Low-rank Transformer with Dynamic Depth and Width for Speech Recognition
    Yu, Fan
    Xi, Wei
    Yang, Zhao
    Tong, Ziye
    Sun, Jingtong
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [22] Dynamic Low-rank Estimation for Transformer-based Language Models
    Huai, Ting
    Lie, Xiao
    Gao, Shangqian
    Hsu, Yenchang
    Shen, Yilin
    Jin, Hongxia
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 9275 - 9287
  • [23] Dynamic spatial aware graph transformer for spatiotemporal traffic flow forecasting
    Li, Zequan
    Zhou, Jinglin
    Lin, Zhizhe
    Zhou, Teng
    KNOWLEDGE-BASED SYSTEMS, 2024, 297
  • [24] Dynamic Low-Rank Instance Adaptation for Universal Neural Image Compression
    Lv, Yue
    Xiang, Jinxi
    Zhang, Jun
    Yang, Wenming
    Han, Xiao
    Yang, Wei
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 632 - 642
  • [25] Spatio-temporal traffic data prediction based on low-rank tensor completion
    Zhao, Yong-Mei
    Dong, Yun-Wei
    Jiaotong Yunshu Gongcheng Xuebao/Journal of Traffic and Transportation Engineering, 2024, 24 (04): : 243 - 258
  • [26] Highly undersampling dynamic cardiac MRI based on low-rank tensor coding
    Liu, Die
    Zhou, Jinjie
    Meng, Miaomiao
    Zhang, Fan
    Zhang, Minghui
    Liu, Qiegen
    MAGNETIC RESONANCE IMAGING, 2022, 89 : 12 - 23
  • [27] Deep Unrolled Low-Rank Tensor Completion for High Dynamic Range Imaging
    Mai, Truong Thanh Nhat
    Lam, Edmund Y.
    Lee, Chul
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 5774 - 5787
  • [28] Low-rank tensor completion with spatial-spectral consistency for hyperspectral image restoration
    Xiao, Zhiwen
    Zhu, Hu
    OPTOELECTRONICS LETTERS, 2023, 19 (07) : 432 - 436