Time Series Forecasting using Sequence-to-Sequence Deep Learning Framework

Cited by: 39
Authors
Du, Shengdong [1 ]
Li, Tianrui [1 ]
Horng, Shi-Jinn [2 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu 611756, Sichuan, Peoples R China
[2] Natl Taiwan Univ Sci & Technol, Dept Comp Sci & Informat Engn, Taipei, Taiwan
Funding
National Natural Science Foundation of China;
Keywords
Time series forecasting; LSTM; Encoder-decoder; PM2.5; Sequence-to-sequence deep learning; HYBRID;
DOI
10.1109/PAAP.2018.00037
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Time series forecasting has been regarded as a key research problem in various fields, such as financial forecasting, traffic flow forecasting, medical monitoring, intrusion detection, anomaly detection, and air quality forecasting. In this paper, we propose a sequence-to-sequence deep learning framework for multivariate time series forecasting, which addresses the dynamic, spatio-temporal, and nonlinear characteristics of multivariate time series data through an LSTM-based encoder-decoder architecture. Experiments on air quality multivariate time series forecasting show that the proposed model achieves better forecasting performance than classic shallow learning and baseline deep learning models, and that the predicted PM2.5 values closely match the ground truth under both single-timestep and multi-timestep forward forecasting conditions. The experimental results demonstrate that our model handles multivariate time series forecasting with satisfactory accuracy.
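The abstract's core idea, an LSTM encoder that reads a multivariate window and an LSTM decoder that rolls out multi-timestep forecasts, can be illustrated with a minimal numpy-only forward-pass sketch. This is not the authors' implementation: the hidden size, window length, horizon, and feedback-of-predictions decoding scheme are illustrative assumptions, and the weights are random (untrained), so the outputs are only shape-correct, not meaningful forecasts.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate order in the stacked weights: input, forget, cell, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b                    # stacked pre-activations, shape (4H,)
    i = 1 / (1 + np.exp(-z[:H]))             # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))          # forget gate
    g = np.tanh(z[2*H:3*H])                  # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:]))           # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def seq2seq_forecast(window, horizon, enc, dec, out_W, out_b, hidden):
    """Encode the observed window, then decode `horizon` future steps,
    feeding each prediction back as the decoder's next input."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in window:                         # encoder: compress the history
        h, c = lstm_step(x, h, c, *enc)
    y = window[-1]                           # seed the decoder with the last observation
    preds = []
    for _ in range(horizon):                 # decoder: autoregressive roll-out
        h, c = lstm_step(y, h, c, *dec)
        y = out_W @ h + out_b                # linear read-out back to feature space
        preds.append(y)
    return np.array(preds)

rng = np.random.default_rng(0)
D, H = 3, 8                                  # D features (e.g. PM2.5 plus covariates), hidden size H

def random_cell():
    return (rng.normal(0, 0.1, (4*H, D)),    # input-to-hidden weights
            rng.normal(0, 0.1, (4*H, H)),    # hidden-to-hidden weights
            np.zeros(4*H))                   # biases

window = rng.normal(size=(24, D))            # 24 past timesteps of a multivariate series
preds = seq2seq_forecast(window, horizon=6, enc=random_cell(), dec=random_cell(),
                         out_W=rng.normal(0, 0.1, (D, H)), out_b=np.zeros(D), hidden=H)
print(preds.shape)                           # (6, 3): six future steps, three features each
```

In practice the encoder and decoder weights would be trained jointly (e.g. by minimizing squared error over the forecast horizon); this sketch only shows the information flow the paper's architecture relies on.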
Pages: 171-176
Number of pages: 6
Related Papers
50 records
  • [41] Voice Conversion Using Sequence-to-Sequence Learning of Context Posterior Probabilities
    Miyoshi, Hiroyuki
    Saito, Yuki
    Takamichi, Shinnosuke
    Saruwatari, Hiroshi
    18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION, 2017, : 1268 - 1272
  • [42] Sequence-to-Sequence Learning with Latent Neural Grammars
    Kim, Yoon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [43] DUALFORMER: A UNIFIED BIDIRECTIONAL SEQUENCE-TO-SEQUENCE LEARNING
    Chien, Jen-Tzung
    Chang, Wei-Hsiang
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7718 - 7722
  • [44] OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
    Wang, Peng
    Yang, An
    Men, Rui
    Lin, Junyang
    Bai, Shuai
    Li, Zhikang
    Ma, Jianxin
    Zhou, Chang
    Zhou, Jingren
    Yang, Hongxia
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [45] Forecasting Photovoltaic Power Production using a Deep Learning Sequence to Sequence Model with Attention
    Kharlova, Elizaveta
    May, Daniel
    Musilek, Petr
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [47] Explainable sequence-to-sequence GRU neural network for pollution forecasting
    Borujeni, Sara Mirzavand
    Arras, Leila
    Srinivasan, Vignesh
    Samek, Wojciech
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [48] The impact of memory on learning sequence-to-sequence tasks
    Seif, Alireza
    Loos, Sarah A. M.
    Tucci, Gennaro
    Roldan, Edgar
    Goldt, Sebastian
    MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2024, 5 (01):
  • [49] Sequence-to-sequence modeling for graph representation learning
    Taheri, Aynaz
    Gimpel, Kevin
    Berger-Wolf, Tanya
    Applied Network Science, 4
  • [50] A Fuzzy Training Framework for Controllable Sequence-to-Sequence Generation
    Li, Jiajia
    Wang, Ping
    Li, Zuchao
    Liu, Xi
    Utiyama, Masao
    Sumita, Eiichiro
    Zhao, Hai
    Ai, Haojun
    IEEE ACCESS, 2022, 10 : 92467 - 92480