Invariant subspace learning for time series data based on dynamic time warping distance

Cited: 20
Authors
Deng, Huiqi [1 ,2 ]
Chen, Weifu [1 ]
Shen, Qi [4 ]
Ma, Andy J. [3 ]
Yuen, Pong C. [2 ]
Feng, Guocan [1 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Math, Guangzhou, Peoples R China
[2] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
[3] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou, Peoples R China
[4] Icahn Sch Med Mt Sinai, Dept Genet & Genom Sci, New York, NY 10029 USA
Keywords
Invariant subspace learning; Dynamic time warping (DTW); Time series; Dictionary learning; KERNEL; REPRESENTATION; CLASSIFICATION; SPARSE;
DOI
10.1016/j.patcog.2020.107210
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Low-dimensional and compact representation of time series data is important for mining and storage. In practice, time series data are vulnerable to various temporal transformations, such as shifting and temporal scaling, which are unavoidable in the process of data collection. If a learning algorithm measures the difference between such transformed data directly with Euclidean distance, the measurement cannot faithfully reflect their similarity, and hence the algorithm cannot learn the underlying discriminative features. To solve this problem, we develop a novel subspace learning algorithm based on dynamic time warping (DTW) distance, an elastic distance defined in a DTW space. The algorithm aims to minimize the reconstruction error in the DTW space. However, since the DTW space is a semi-pseudo metric space, it is difficult to generalize common subspace learning algorithms to such spaces. In this work, we introduce warp operators with which the DTW reconstruction error can be approximated by the reconstruction error between the transformed series and their reconstructions in a subspace. The warp operators align time series with their linear representations in the DTW space, which is particularly important for misaligned series, so that the subspace can be learned to obtain an intrinsic basis (dictionary) for representing the data. The warp operators and the subspace are optimized alternately until equilibrium is reached. Experimental results show that the proposed algorithm outperforms traditional subspace learning algorithms and temporal transform-invariance based methods (including SIDL, kernel PCA, and SPMC), and obtains results competitive with state-of-the-art algorithms such as BOSS. (C) 2020 Elsevier Ltd. All rights reserved.
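To make the abstract's premise concrete, the following is a minimal textbook sketch of the classic DTW distance the paper builds on; it is not the authors' invariant subspace learning algorithm, only an illustration of why an elastic distance is robust to the temporal shifts that break Euclidean distance:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1-D series.

    A standard dynamic-programming DTW sketch (absolute-difference local
    cost), shown only to illustrate the elastic distance concept.
    """
    n, m = len(x), len(y)
    # D[i, j] = cost of the best warping path aligning x[:i] with y[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # a path may repeat a point in either series (elastic alignment)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# b is a shifted copy of a: Euclidean distance is large, DTW distance is 0.
a = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 0.0])
print(dtw_distance(a, b))     # → 0.0
print(np.linalg.norm(a - b))  # → 2.0
```

The shifted pulse example shows exactly the failure mode the abstract describes: the rigid Euclidean comparison penalizes the one-step shift, while the warping path absorbs it at zero cost.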
Pages: 13