StackDA: A Stacked Dual Attention Neural Network for Multivariate Time-Series Forecasting

Cited by: 0
Authors
Hong, Jungsoo [1 ]
Park, Jinuk [1 ]
Park, Sanghyun [1 ]
Affiliations
[1] Yonsei Univ, Dept Comp Sci, Seoul 03722, South Korea
Keywords
Forecasting; Predictive models; Decoding; Training; Data models; Noise reduction; Deep learning; Attention mechanism; autoregressive model; denoising training; multi-step forecasting; multivariate time-series forecasting
DOI
10.1109/ACCESS.2021.3122910
CLC number
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Multivariate time-series forecasting derives key seasonality from past patterns to predict future values. Multi-step forecasting is crucial in the industrial sector because a continuous outlook leads to more effective decisions. However, because each step depends on previously predicted values, multi-step forecasting is highly unstable. To mitigate this problem, we introduce a novel encoder-decoder model named the stacked dual attention neural network (StackDA). In dual attention, the first attention captures the time dependency between the encoder and decoder, and the second captures the time dependency across the decoder time steps. We stack the dual attention to stabilize long-term dependencies and multi-step forecasting. Because our method is based on a nonlinear neural network, we add an autoregressive component to compensate for its lack of linear properties. Unlike the conventional autoregressive model, we propose a skip-autoregressive component to deal with multiple seasonalities. Furthermore, we propose a denoising training method that combines the advantages of training with and without teacher forcing. We adopt multi-head fully connected layers for variable-specific modeling of the multivariate inputs, and we add positional encoding to provide the model with time information so that it recognizes seasonality more accurately. We compare the performance of our model with that of machine learning and deep learning baselines to verify our approach. Finally, we conduct various experiments, including an ablation study, a seasonality determination test, and a stacked attention test, to demonstrate the performance of StackDA.
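The skip-autoregressive idea mentioned in the abstract can be illustrated with a minimal sketch: instead of regressing on consecutive lags, the linear component regresses on values spaced one seasonal period apart, so a single small weight vector can track a repeating pattern. The function name, weight values, and toy series below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def skip_ar_forecast(series, weights, skip):
    """Illustrative skip-AR step (assumed form, not the paper's exact model):
    the next value is a linear combination of past values taken every
    `skip` steps, where `skip` is a seasonal period, rather than of
    consecutive lags as in a conventional AR model."""
    lags = [series[-1 - k * skip] for k in range(len(weights))]
    return float(np.dot(weights, lags))

# Toy series with seasonal period 4: the pattern 1,2,3,4 repeats exactly.
series = np.array([1.0, 2.0, 3.0, 4.0] * 3)

# With skip=4, both selected lags land on the same phase of the season,
# so averaging them reproduces the seasonal value.
pred = skip_ar_forecast(series, weights=np.array([0.5, 0.5]), skip=4)
```

For multiple seasonalities (e.g. daily and weekly), one such component per period can be summed; the skip spacing is what lets each weight vector stay short while still reaching far back in time.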
Pages: 145955-145967
Page count: 13