Multiscale spatial-temporal transformer with consistency representation learning for multivariate time series classification

Cited by: 1
Authors:
Wu, Wei [1 ,2 ,3 ,4 ]
Qiu, Feiyue [1 ,3 ]
Wang, Liping [3 ]
Liu, Yanxiu [3 ]
Affiliations:
[1] Zhejiang Univ Technol, Coll Educ, 288 Liuhe Rd,Liuxia St, Hangzhou, Peoples R China
[2] Wuyi Univ, Fujian Key Lab Big Data Applicat & Intellectualiza, Wuyishan, Peoples R China
[3] Zhejiang Univ Technol, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[4] Wuyi Univ, Key Lab Cognit Comp & Intelligent Informat Proc Fu, Jiangmen, Fujian, Peoples R China
Keywords:
representation learning; spatial-temporal consistency; time series classification; transformer; NETWORKS; MECHANISM;
DOI
10.1002/cpe.8234
CLC classification: TP31 [Computer software]
Subject classification: 081202; 0835
Abstract
Multivariate time series classification is important in fields such as healthcare, energy management, and industrial manufacturing. Existing research focuses on capturing temporal changes or computing similarities between series to perform classification. However, as the state of the underlying system changes, capturing the spatial-temporal consistency within a multivariate time series becomes key to accurate classification. This paper proposes the MSTformer model, designed specifically for multivariate time series classification. Built on the Transformer architecture, the model focuses on multiscale information across both the time and feature dimensions. The encoder uses a learnable multiscale attention mechanism that divides the data into sequences at varying temporal scales to learn multiscale temporal features. The decoder receives a spatial view of the data and uses a dynamic scale attention mechanism to learn spatial-temporal consistency in one-dimensional space. In addition, the paper proposes an adaptive aggregation mechanism that synchronizes and combines the outputs of the encoder and decoder, and introduces a multiscale 2D separable convolution that learns spatial-temporal consistency in two-dimensional space, strengthening the model's ability to learn spatial-temporal consistency representations. Extensive experiments on 30 datasets show that MSTformer outperforms competing models with an average accuracy of 85.6%. Ablation studies further demonstrate the reliability and stability of MSTformer.
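For illustration only, the sketch below shows one plausible reading of the multiscale idea described in the abstract: the multivariate series is segmented at several temporal scales, each scale is encoded with self-attention, and the per-scale representations are combined with learned, softmax-normalized weights (a stand-in for the adaptive aggregation mechanism). This is not the authors' MSTformer implementation; the module name MultiscaleTemporalAttention, the parameter scale_logits, and the segment-then-average scheme are all assumptions made for this example.

```python
# Minimal, hypothetical sketch of multiscale temporal attention with learned
# aggregation over scales (not the published MSTformer code).
import torch
import torch.nn as nn


class MultiscaleTemporalAttention(nn.Module):
    def __init__(self, n_channels: int, d_model: int = 64, scales=(4, 8, 16), n_heads: int = 4):
        super().__init__()
        self.scales = scales
        self.embed = nn.Linear(n_channels, d_model)  # per-time-step embedding
        self.encoders = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=2 * d_model,
                                       batch_first=True)
            for _ in scales
        )
        # Learnable mixture weights over scales (stand-in for adaptive aggregation).
        self.scale_logits = nn.Parameter(torch.zeros(len(scales)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) multivariate time series
        z = self.embed(x)                                   # (batch, time, d_model)
        pooled = []
        for scale, enc in zip(self.scales, self.encoders):
            # Segment the sequence into non-overlapping windows of length `scale`
            # and average within each window to obtain a coarser temporal view.
            b, t, d = z.shape
            t_trim = (t // scale) * scale
            seg = z[:, :t_trim].reshape(b, t_trim // scale, scale, d).mean(dim=2)
            pooled.append(enc(seg).mean(dim=1))             # (batch, d_model) per scale
        weights = torch.softmax(self.scale_logits, dim=0)   # learned weight per scale
        return sum(w * p for w, p in zip(weights, pooled))  # (batch, d_model)


if __name__ == "__main__":
    model = MultiscaleTemporalAttention(n_channels=6)
    series = torch.randn(8, 128, 6)   # 8 samples, 128 time steps, 6 variables
    print(model(series).shape)        # torch.Size([8, 64])
```

The resulting fixed-size representation could feed a linear classification head; how the paper actually couples the multiscale encoder with the decoder's spatial view and the 2D separable convolution is described only at a high level in the abstract, so it is not reproduced here.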
Pages: 19