An improved self-attention for long-sequence time-series data forecasting with missing values

Cited by: 1
Authors
Zhang, Zhi-cheng [1 ]
Wang, Yong [1 ]
Peng, Jian-jian [1 ]
Duan, Jun-ting [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Ctr Cyber Secur, 2006 Xiyuan Ave, Chengdu 611731, Sichuan, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2024, Vol. 36, No. 8
Keywords
Long-sequence time-series forecasting; Incomplete time-series data sequences; Self-attention; Transformers;
DOI
10.1007/s00521-023-09347-6
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Long-sequence time-series data forecasting based on deep learning has been applied in many practical scenarios. However, time-series sequences collected in the real world inevitably contain missing values caused by sensor failures or network fluctuations. Most current research imputes the incomplete sequences during the data-preprocessing stage, which leads to unsynchronized prediction and error accumulation. In this article, we propose an improved multi-head self-attention mechanism, DecayAttention, which can be applied to existing X-former models to handle missing values in time-series sequences without decreasing their prediction accuracy. We apply DecayAttention to Transformer and two state-of-the-art X-former models, achieving a best-case prediction-accuracy improvement of 8.2%.
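The abstract does not spell out how DecayAttention treats missing entries, so the following is only a plausibility sketch: one common way to make attention missing-value-aware is to down-weight keys at unobserved positions by a learned exponential decay over the time elapsed since the last observation, so the model forecasts directly on the incomplete sequence with no imputation pass. Everything below, including the class name DecayAttentionSketch, the softplus-parameterized decay rate gamma, and the inputs observed_mask and time_since_obs, is an illustrative assumption, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DecayAttentionSketch(nn.Module):
        """Hypothetical single-head sketch of decay-aware self-attention.

        Assumed mechanism: keys at missing positions are down-weighted by a
        learned exponential decay over the time elapsed since the position
        was last observed, so forecasting runs directly on incomplete
        sequences instead of on imputed ones.
        """

        def __init__(self, d_model: int):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            # Assumed parameterization: softplus keeps the decay rate positive.
            self.gamma = nn.Parameter(torch.tensor(0.1))
            self.scale = d_model ** -0.5

        def forward(self, x, observed_mask, time_since_obs):
            # x:              (B, L, d_model) embedded input sequence
            # observed_mask:  (B, L), 1.0 where observed, 0.0 where missing
            # time_since_obs: (B, L), steps since each position's last observation
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
            logits = q @ k.transpose(-2, -1) * self.scale  # (B, L, L)
            # Exponential decay toward zero weight for long-missing keys.
            decay = torch.exp(-F.softplus(self.gamma) * time_since_obs)
            key_weight = observed_mask + (1.0 - observed_mask) * decay
            # Adding log-weights to the logits multiplies softmax scores by key_weight.
            logits = logits + torch.log(key_weight.clamp_min(1e-8)).unsqueeze(1)
            attn = logits.softmax(dim=-1)
            return attn @ v  # (B, L, d_model)

    # Toy usage: batch of 2 sequences of length 96 with ~20% random missingness.
    x = torch.randn(2, 96, 64)
    mask = (torch.rand(2, 96) > 0.2).float()
    gap = torch.cumsum(1.0 - mask, dim=1)  # crude "time since last observation"
    out = DecayAttentionSketch(64)(x, mask, gap)
    print(out.shape)  # torch.Size([2, 96, 64])

Working in the log domain is a deliberate choice in this sketch: adding log(key_weight) to the logits rescales the post-softmax attention multiplicatively, so fully observed keys are untouched while long-missing keys fade smoothly toward zero weight.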
Pages: 3921-3940
Page count: 20
Related Papers
50 records in total
  • [1] Rethink the Top-u Attention in Sparse Self-attention for Long Sequence Time-Series Forecasting
    Meng, Xiangxu
    Li, Wei
    Gaber, Tarek
    Zhao, Zheng
    Chen, Chuhao
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VI, 2023, 14259 : 256 - 267
  • [2] CTFNet: Long-Sequence Time-Series Forecasting Based on Convolution and Time-Frequency Analysis
    Zhang, Zhiqiang
    Chen, Yuxuan
    Zhang, Dandan
    Qian, Yining
    Wang, Hongbing
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 16368 - 16382
  • [3] A hybrid framework for multivariate long-sequence time series forecasting
    Wang, Xiaohu
    Wang, Yong
    Peng, Jianjian
    Zhang, Zhicheng
    Tang, Xueliang
    APPLIED INTELLIGENCE, 2023, 53 (11) : 13549 - 13568
  • [4] Double-Layer Attention for Long Sequence Time-Series Forecasting
    Ma, Jiasheng
    Wan, Xiaoye
    Xiao, Yingyuan
    DATABASE AND EXPERT SYSTEMS APPLICATIONS, DEXA 2023, PT II, 2023, 14147 : 230 - 244
  • [5] Uncertainty-Aware Self-Attention Model for Time Series Prediction with Missing Values
    Li, Jiabao
    Wang, Chengjun
    Su, Wenhang
    Ye, Dongdong
    Wang, Ziyang
    FRACTAL AND FRACTIONAL, 2025, 9 (03)
  • [6] Enformer: Encoder-Based Sparse Periodic Self-Attention Time-Series Forecasting
    Wang, Na
    Zhao, Xianglian
    IEEE ACCESS, 2023, 11 : 112004 - 112014
  • [7] Time-Series Forecasting Through Contrastive Learning with a Two-Dimensional Self-attention Mechanism
    Jiang, Linling
    Zhang, Fan
    Zhang, Mingli
    Zhang, Caiming
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT II, 2024, 14448 : 147 - 165
  • [8] Bridging Self-Attention and Time Series Decomposition for Periodic Forecasting
    Jiang, Song
    Syed, Tahin
    Zhu, Xuan
    Levy, Joshua
    Aronchik, Boris
    Sun, Yizhou
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 3202 - 3211