Foreformer: an enhanced transformer-based framework for multivariate time series forecasting

Cited by: 14
Authors
Yang, Ye [1]
Lu, Jiangang [1,2]
Affiliations
[1] Zhejiang Univ, Coll Control Sci & Engn, State Key Lab Ind Control Technol, Hangzhou 310027, Peoples R China
[2] Zhejiang Lab, Hangzhou 311121, Peoples R China
Keywords
Multivariate time series forecasting; Attention mechanism; Deep learning; Multi-resolution; Static covariate; Transformer; Convolutional networks
DOI
10.1007/s10489-022-04100-3
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Multivariate time series forecasting (MTSF) has been extensively studied over the years, with ubiquitous applications in finance, traffic, the environment, etc. Recent investigations have demonstrated the potential of the Transformer to improve forecasting performance. The Transformer, however, has limitations that prevent it from being directly applied to MTSF, such as insufficient extraction of temporal patterns at different time scales, extraction of irrelevant information in the self-attention, and no targeted processing of static covariates. Motivated by the above, an enhanced Transformer-based framework for MTSF, named Foreformer, is proposed with three distinctive characteristics: (i) a multi-temporal resolution module that deeply captures temporal patterns at different scales, (ii) an explicit sparse attention mechanism that forces the model to prioritize the most contributive components, and (iii) a static covariates processing module for nonlinear processing of static covariates. Extensive experiments on three real-world datasets demonstrate that Foreformer outperforms existing methodologies, making it a reliable approach for MTSF tasks.
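To illustrate the sparse-attention idea mentioned in the abstract, the sketch below shows a generic top-k sparse self-attention step in PyTorch: for each query, only the k largest attention scores are kept before the softmax, so the output attends only to the most contributive keys. The function name topk_sparse_attention, the top-k selection rule, and the tensor shapes are illustrative assumptions and are not taken from the Foreformer paper itself.

    import torch
    import torch.nn.functional as F

    def topk_sparse_attention(q, k, v, top_k=8):
        # q, k, v: (batch, seq_len, d_model). Keep only the top_k largest
        # scores per query row and mask the rest to -inf before the softmax.
        d_model = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d_model ** 0.5       # (batch, L, L)
        top_k = min(top_k, scores.size(-1))
        kth_best = scores.topk(top_k, dim=-1).values[..., -1:]  # k-th largest score per row
        sparse_scores = scores.masked_fill(scores < kth_best, float("-inf"))
        attn = F.softmax(sparse_scores, dim=-1)                 # renormalize the surviving scores
        return attn @ v                                         # (batch, L, d_model)

    # Hypothetical usage on random data:
    q = k = v = torch.randn(2, 96, 64)
    out = topk_sparse_attention(q, k, v, top_k=8)
    print(out.shape)  # torch.Size([2, 96, 64])

In this sketch the masked entries receive zero weight after the softmax, which is one common way to realize an explicit sparse attention; the paper's exact mechanism may select components differently.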
Pages: 12521-12540
Page count: 20
Related Papers
50 records in total (entries [21]-[30] shown)
  • [21] Unsupervised Anomaly Detection in Multivariate Time Series through Transformer-based Variational Autoencoder
    Zhang, Hongwei
    Xia, Yuanqing
    Yan, Tijin
    Liu, Guiyang
    PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021: 281-286
  • [22] How Features Benefit: Parallel Series Embedding for Multivariate Time Series Forecasting with Transformer
    Feng, Xuande
    Lyu, Zonglin
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022: 967-975
  • [23] Memory-based Transformer with shorter window and longer horizon for multivariate time series forecasting
    Liu, Yang
    Wang, Zheng
    Yu, Xinyang
    Chen, Xin
    Sun, Meijun
    PATTERN RECOGNITION LETTERS, 2022, 160: 26-33
  • [24] Heterogeneous Graph Transformer Auto-Encoder for multivariate time series forecasting
    Ye, Hongjiang
    Sun, Ying
    Gao, Yu
    Xu, Feiyi
    Qi, Jin
    COMPUTERS & ELECTRICAL ENGINEERING, 2025, 122
  • [25] Spatial-Temporal Convolutional Transformer Network for Multivariate Time Series Forecasting
    Huang, Lei
    Mao, Feng
    Zhang, Kai
    Li, Zhiheng
    SENSORS, 2022, 22 (03)
  • [26] Traffic Transformer: Transformer-based framework for temporal traffic accident prediction
    Al-Thani, Mansoor G.
    Sheng, Ziyu
    Cao, Yuting
    Yang, Yin
    AIMS MATHEMATICS, 2024, 9 (05): 12610-12629
  • [27] Multivariate time series anomaly detection via separation, decomposition, and dual transformer-based autoencoder
    Fu, Shiyuan
    Gao, Xin
    Li, Baofeng
    Zhai, Feng
    Lu, Jiansheng
    Xue, Bing
    Yu, Jiahao
    Xiao, Chun
    APPLIED SOFT COMPUTING, 2024, 159
  • [28] Forecasting chaotic time series: Comparative performance of LSTM-based and Transformer-based neural network
    Valle, Joao
    Bruno, Odemir Martinez
    CHAOS SOLITONS & FRACTALS, 2025, 192
  • [29] DTIN: Dual Transformer-based Imputation Nets for multivariate time series emitter missing data
    Sun, Ziyue
    Li, Haozhe
    Wang, Wenhai
    Liu, Jiaqi
    Liu, Xinggao
    KNOWLEDGE-BASED SYSTEMS, 2024, 284
  • [30] Frequency-Enhanced Transformer with Symmetry-Based Lightweight Multi-Representation for Multivariate Time Series Forecasting
    Wang, Chenyue
    Zhang, Zhouyuan
    Wang, Xin
    Liu, Mingyang
    Chen, Lin
    Pi, Jiatian
    SYMMETRY-BASEL, 2024, 16 (07)