Proximity Forest: an effective and scalable distance-based classifier for time series

Cited by: 127
Authors
Lucas, Benjamin [1 ]
Shifaz, Ahmed [1 ]
Pelletier, Charlotte [1 ]
O'Neill, Lachlan [1 ]
Zaidi, Nayyar [1 ]
Goethals, Bart [1 ]
Petitjean, Francois [1 ]
Webb, Geoffrey I. [1 ]
Affiliations
[1] Monash Univ, Fac Informat Technol, 25 Exhibit Walk, Melbourne, Vic 3800, Australia
Funding
Australian Research Council;
Keywords
Time series classification; Scalable classification; Time-warp similarity measures; Ensemble;
DOI
10.1007/s10618-019-00617-3
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Research into the classification of time series has made enormous progress in the last decade. The UCR time series archive has played a significant role in challenging and guiding the development of new learners for time series classification. The largest dataset in the UCR archive holds only 10,000 time series, which may explain why the primary research focus has been on creating algorithms that have high accuracy on relatively small datasets. This paper introduces Proximity Forest, an algorithm that learns accurate models from datasets with millions of time series, and classifies a time series in milliseconds. The models are ensembles of highly randomized Proximity Trees. Whereas conventional decision trees branch on attribute values (and usually perform poorly on time series), Proximity Trees branch on the proximity of time series to one exemplar time series or another, allowing us to leverage the decades of work into developing relevant measures for time series. Proximity Forest gains both efficiency and accuracy by stochastic selection of both exemplars and similarity measures. Our work is motivated by recent time series applications that provide orders of magnitude more time series than the UCR benchmarks. Our experiments demonstrate that Proximity Forest is highly competitive on the UCR archive: it ranks among the most accurate classifiers while being significantly faster. We demonstrate on a 1M time series Earth observation dataset that Proximity Forest retains this accuracy on datasets that are many orders of magnitude greater than those in the UCR repository, while learning its models at least 100,000 times faster than the current state-of-the-art models Elastic Ensemble and COTE.
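The splitting idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: it uses only two candidate measures (Euclidean and an unconstrained DTW) rather than the paper's full pool, and the function names (`make_split`, `route`) are hypothetical. A split picks one random exemplar per class and one random measure; a series is then routed down the branch of its nearest exemplar.

```python
import math
import random


def euclidean(a, b):
    # Straight-line distance between two equal-length series.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def dtw(a, b):
    # Unconstrained dynamic time warping via the classic O(n*m) recurrence.
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return math.sqrt(cost[n][m])


MEASURES = [euclidean, dtw]  # stand-in for the paper's larger pool of measures


def make_split(series_by_class, rng):
    # Stochastic split: one random exemplar per class, one random measure.
    measure = rng.choice(MEASURES)
    exemplars = {label: rng.choice(pool) for label, pool in series_by_class.items()}
    return measure, exemplars


def route(series, measure, exemplars):
    # Branch on proximity: follow the edge of the nearest exemplar.
    return min(exemplars, key=lambda label: measure(series, exemplars[label]))
```

A Proximity Tree would recurse on each branch's subset until a node is pure, and a Proximity Forest would ensemble many such trees, each built with its own random splits.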
Pages: 607-635
Page count: 29