Multivariate Time Series Forecasting Using Multiscale Recurrent Networks With Scale Attention and Cross-Scale Guidance

Cited: 8
Authors
Guo, Qiang [1 ,2 ]
Fang, Lexin [3 ]
Wang, Ren [3 ]
Zhang, Caiming [3 ,4 ]
Affiliations
[1] Shandong Univ Finance & Econ, Sch Comp Sci, Jinan 250014, Peoples R China
[2] Shandong Univ Finance & Econ, Shandong Prov Key Lab Digital Media Technol, Jinan 250014, Peoples R China
[3] Shandong Univ, Sch Software, Jinan 250100, Peoples R China
[4] Shandong Prov Lab Future Intelligence & Financial, Yantai 264005, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Attention mechanism; cross-scale guidance; multiscale decomposition; recurrent neural networks (RNNs); time series forecasting; SINGULAR SPECTRUM ANALYSIS;
DOI
10.1109/TNNLS.2023.3326140
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multivariate time series (MTS) forecasting is a challenging task due to the complex and nonlinear interdependencies between time steps and between series. With the advance of deep learning, significant efforts have been made to model the long-term and short-term temporal patterns hidden in historical data using recurrent neural networks (RNNs) with temporal attention mechanisms. Although various forecasting models have been developed, most are single-scale oriented, which causes a loss of scale information. In this article, we seamlessly integrate multiscale analysis into deep learning frameworks to build scale-aware recurrent networks and propose two multiscale recurrent network (MRN) models for MTS forecasting. The first model, called MRN-SA, adopts a scale attention mechanism to dynamically select the most relevant information from different scales and simultaneously employs input attention and temporal attention to make predictions. The second, named MRN-CSG, introduces a novel cross-scale guidance mechanism that exploits information from the coarse scale to guide the decoding process at the fine scale, yielding a lightweight and more easily trained model without an obvious loss of accuracy. Extensive experimental results demonstrate that both MRN-SA and MRN-CSG achieve state-of-the-art performance on five typical MTS datasets from different domains. The source code will be publicly available at https://github.com/qguo2010/MRN.
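The abstract does not specify how the scale attention mechanism fuses information across scales; one plausible reading is that each scale contributes a feature vector, and the scales are combined with softmax-normalized attention weights. The sketch below illustrates this generic idea only — the array shapes, the random features, and the query vector are all illustrative assumptions, not the authors' MRN-SA implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical multiscale features: S scales, each summarized as a
# d-dimensional vector (e.g., an RNN encoding of the series at that
# resolution after multiscale decomposition).
S, d = 3, 8
scale_feats = rng.standard_normal((S, d))

# Scale attention: score each scale against a query vector (standing in
# for learned parameters), normalize the scores with softmax, and fuse
# the scales into one scale-aware representation.
query = rng.standard_normal(d)
scores = scale_feats @ query        # shape (S,): one score per scale
weights = softmax(scores)           # attention weights over scales
fused = weights @ scale_feats       # shape (d,): weighted combination
```

The softmax lets the model emphasize whichever scale is most relevant at prediction time while still receiving a gradient through every scale.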
Pages: 540-554
Number of Pages: 15
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30 (05) : 1407 - 1418