ShapeFormer: Shapelet Transformer for Multivariate Time Series Classification

Cited by: 5
Authors
Le, Xuan-May [1 ]
Luo, Ling [1 ]
Aickelin, Uwe [1 ]
Tran, Minh-Tuan [2 ]
Affiliations
[1] Univ Melbourne, Melbourne, Vic, Australia
[2] Monash Univ, Melbourne, Vic, Australia
Source
PROCEEDINGS OF THE 30TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2024 | 2024
Keywords
time series; shapelet; transformer; attention; classification
DOI
10.1145/3637528.3671862
Chinese Library Classification
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
Multivariate time series classification (MTSC) has attracted significant research attention due to its diverse real-world applications. Recently, exploiting transformers for MTSC has achieved state-of-the-art performance. However, existing methods focus on generic features, providing a comprehensive understanding of the data, but ignore class-specific features crucial for learning the representative characteristics of each class. This leads to poor performance on imbalanced datasets or on datasets whose classes share similar overall patterns but differ in minor class-specific details. In this paper, we propose a novel Shapelet Transformer (ShapeFormer), which comprises class-specific and generic transformer modules to capture both types of features. In the class-specific module, we introduce a discovery method to extract the discriminative subsequences of each class (i.e., shapelets) from the training set. We then propose a Shapelet Filter to learn the difference features between these shapelets and the input time series. We find that the difference feature for each shapelet contains important class-specific information, as it shows a significant distinction between its class and the others. In the generic module, convolution filters extract generic features that carry information for distinguishing among all classes. In each module, a transformer encoder captures the correlations among the module's features. As a result, the combination of the two transformer modules allows our model to exploit the power of both types of features, thereby enhancing classification performance. Our experiments on 30 UEA MTSC datasets demonstrate that ShapeFormer achieves the highest accuracy ranking compared to state-of-the-art methods. The code is available at https://github.com/xuanmay2701/shapeformer.
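The core idea of the class-specific module (a "difference feature" between a shapelet and the input series) can be sketched minimally as follows. This is an illustrative simplification, not the paper's implementation: it assumes a univariate series, a squared-distance matching criterion, and that the difference is taken at the shapelet's best-matching offset; the function names are made up for this sketch.

```python
# Illustrative sketch of a shapelet "difference feature" (assumption:
# univariate series, best match chosen by squared Euclidean distance).

def best_match_offset(series, shapelet):
    """Return the offset where the shapelet is closest to the series."""
    best_off, best_dist = 0, float("inf")
    for off in range(len(series) - len(shapelet) + 1):
        dist = sum((series[off + i] - s) ** 2 for i, s in enumerate(shapelet))
        if dist < best_dist:
            best_off, best_dist = off, dist
    return best_off

def difference_feature(series, shapelet):
    """Element-wise difference at the shapelet's best-matching position."""
    off = best_match_offset(series, shapelet)
    return [series[off + i] - s for i, s in enumerate(shapelet)]

series = [0.0, 0.1, 1.0, 2.0, 1.0, 0.1, 0.0]
shapelet = [1.0, 2.0, 1.0]  # a discriminative subsequence for some class
print(difference_feature(series, shapelet))  # → [0.0, 0.0, 0.0]
```

A near-zero difference vector indicates that the series contains the shapelet's class-specific pattern, while large residuals indicate its absence; in ShapeFormer these difference features are then fed to a transformer encoder.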
Pages: 1484-1494 (11 pages)