Look into the LITE in deep learning for time series classification

Times Cited: 1
Authors
Ismail-Fawaz, Ali [1]
Devanne, Maxime [1]
Berretti, Stefano [2]
Weber, Jonathan [1]
Forestier, Germain [1,3]
Affiliations
[1] Univ Haute Alsace, IRIMAS, Mulhouse, France
[2] Univ Florence, MICC, Florence, Italy
[3] Monash Univ, DSAI, Melbourne, Australia
Keywords
Time series classification; Deep learning; Convolutional neural networks; DepthWise separable convolutions
DOI
10.1007/s41060-024-00708-5
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning models have been shown to be a powerful solution for Time Series Classification (TSC). State-of-the-art architectures, while producing promising results on the UCR and the UEA archives, present a high number of trainable parameters. This can lead to long training times with high CO2 emissions, high power consumption, and a possible increase in the number of FLoating-point Operations Per Second (FLOPS). In this paper, we present a new architecture for TSC, the Light Inception with boosTing tEchnique (LITE), with only 2.34% of the number of parameters of the state-of-the-art InceptionTime model, while preserving performance. This architecture, with only 9,814 trainable parameters due to the usage of DepthWise Separable Convolutions (DWSC), is boosted by three techniques: multiplexing, custom filters, and dilated convolution. The LITE architecture, trained on the UCR archive, is 2.78 times faster than InceptionTime and consumes 2.79 times less CO2 and power, while achieving an average accuracy of 84.62% compared to 84.91% for InceptionTime. To evaluate the performance of the proposed architecture on multivariate time series data, we adapt LITE to handle multivariate time series; we call this version LITEMV. To bring theory into application, we also conducted experiments using LITEMV on multivariate time series representing human rehabilitation movements, showing that LITEMV is not only the most efficient model but also the best performing for this application on the Kimore dataset, a skeleton-based human rehabilitation exercises dataset. Moreover, to address the interpretability of LITEMV, we present a study using Class Activation Maps to understand the classification decisions taken by the model during evaluation.
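The parameter savings described in the abstract come mainly from replacing standard convolutions with DepthWise Separable Convolutions. The snippet below is a minimal sketch, not the authors' implementation, comparing the parameter count of a standard 1D convolution against a depthwise separable (and dilated) one in Keras; the layer sizes are assumptions chosen for illustration, not the exact LITE hyperparameters.

# Minimal sketch (assumed layer sizes, not the authors' code): compare the
# parameter count of a standard 1D convolution with a DepthWise Separable
# (and dilated) convolution of the same kernel size, using Keras.
import tensorflow as tf

seq_len, in_channels, n_filters, kernel_size = 128, 32, 32, 41

inp = tf.keras.Input(shape=(seq_len, in_channels))

# Standard Conv1D: kernel_size * in_channels * n_filters weights (+ n_filters biases).
standard = tf.keras.layers.Conv1D(n_filters, kernel_size, padding="same")(inp)

# DepthWise separable Conv1D: one spatial filter per input channel
# (kernel_size * in_channels weights) followed by a 1x1 pointwise mix
# (in_channels * n_filters weights); dilation widens the receptive field
# without adding parameters.
separable = tf.keras.layers.SeparableConv1D(
    n_filters, kernel_size, padding="same", dilation_rate=2)(inp)

model = tf.keras.Model(inp, [standard, separable])
model.summary()  # roughly 42k parameters for the standard layer vs roughly 2.4k for the separable one

With these assumed sizes, the summary reports about 42,000 weights for the standard layer against about 2,400 for the separable one; this order-of-magnitude reduction per layer is the kind of saving that underlies the small parameter budget reported in the abstract.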
Pages: 21
Related Papers
62 records in total
[1] Ay, Emel; Devanne, Maxime; Weber, Jonathan; Forestier, Germain. A study of Knowledge Distillation in Fully Convolutional Network for Time Series Classification. 2022 International Joint Conference on Neural Networks (IJCNN), 2022.
[2] Bagnall, A. arXiv preprint arXiv:1811.00075, 2018.
[3] Bagnall, Anthony; Lines, Jason; Bostrom, Aaron; Large, James; Keogh, Eamonn. The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Mining and Knowledge Discovery, 2017, 31(3): 606-660.
[4] Bai, S. J. arXiv preprint arXiv:1803.01271, 2018. DOI: 10.48550/arXiv.1803.01271.
[5] Benavoli, A. Journal of Machine Learning Research, 2016, 17.
[6] Brown, T. B. Advances in Neural Information Processing Systems, 2020, 33.
[7] Capecci, Marianna; Ceravolo, Maria Gabriella; Ferracuti, Francesco; Iarlori, Sabrina; Monteriu, Andrea; Romeo, Luca; Verdini, Federica. The KIMORE Dataset: KInematic Assessment of MOvement and Clinical Scores for Remote Monitoring of Physical REhabilitation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2019, 27(7): 1436-1448.
[8] Cui, Z. C. arXiv preprint arXiv:1603.06995, 2016.
[9] Dempster, Angus; Schmidt, Daniel F.; Webb, Geoffrey I. MINIROCKET: A Very Fast (Almost) Deterministic Transform for Time Series Classification. KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021: 248-257.
[10] Dempster, Angus; Petitjean, Francois; Webb, Geoffrey I. ROCKET: exceptionally fast and accurate time series classification using random convolutional kernels. Data Mining and Knowledge Discovery, 2020, 34(5): 1454-1495.