A hybrid lightweight transformer architecture based on fuzzy attention prototypes for multivariate time series classification

Cited by: 0
Authors
Gu, Yan [1 ,2 ]
Jin, Feng [1 ,2 ]
Zhao, Jun [1 ,2 ]
Wang, Wei [1 ,2 ]
Affiliations
[1] Dalian Univ Technol, Key Lab Intelligent Control & Optimizat Ind Equipm, Minist Educ, Dalian 116024, Peoples R China
[2] Dalian Univ Technol, Sch Control Sci & Engn, Dalian 116024, Peoples R China
Keywords
Multivariate time series classification; Data uncertainty; Fuzzy attention; Prototype learning; Network
DOI
10.1016/j.ins.2025.121942
Chinese Library Classification: TP [Automation technology, computer technology]
Discipline Code: 0812
Abstract
Multivariate time series classification has rapidly developed into a research hotspot. Existing methods focus mainly on the feature correlations of time series while ignoring data uncertainty and sample sparsity. To address these challenges, a hybrid lightweight Transformer architecture based on fuzzy attention prototypes, named FapFormer, is proposed. A convolutional spanning Vision Transformer module performs feature extraction and provides inductive bias, incorporating dynamic feature sampling to adaptively select key features and increase training efficiency. A progressive branching convolution (PBC) block and a convolutional self-attention (CSA) block are then introduced to extract local and global features, respectively. Furthermore, a feature complementation strategy enables the CSA block to specialize in global dependencies, overcoming the local receptive field limitations of the PBC block. Finally, a novel fuzzy attention prototype learning method is proposed to represent class prototypes under data uncertainty, using the distances between prototypes and low-dimensional embeddings for classification. Experiments on both the UEA benchmark datasets and a practical industrial dataset demonstrate that FapFormer outperforms several state-of-the-art methods, achieving higher accuracy and lower computational complexity even under data uncertainty and sample sparsity.
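The abstract's closing idea — classifying a sample by its distance to learned class prototypes in a low-dimensional embedding space — can be illustrated with a minimal sketch. The paper's fuzzy attention prototype method is not detailed in this record, so the following shows only the generic nearest-prototype scheme; the function names and the mean-based prototype construction are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def class_prototypes(embeddings, labels, n_classes):
    # Illustrative assumption: one prototype per class, taken as the mean
    # of that class's low-dimensional embeddings.
    return np.stack([embeddings[labels == c].mean(axis=0) for c in range(n_classes)])

def classify_by_distance(queries, prototypes):
    # Euclidean distance from each query embedding to each prototype;
    # the predicted class is the nearest prototype.
    dists = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy example: two well-separated classes in a 2-D embedding space.
rng = np.random.default_rng(0)
emb = np.concatenate([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
lab = np.array([0] * 20 + [1] * 20)
protos = class_prototypes(emb, lab, 2)
pred = classify_by_distance(emb, protos)
print((pred == lab).mean())  # → 1.0 on this cleanly separated toy data
```

In a full prototype-learning model, the distances (or a softmax over negative distances) would feed the training loss so that the embedding network and prototypes are learned jointly, rather than prototypes being fixed class means as in this sketch.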
Pages: 19