Look into the LITE in deep learning for time series classification

Cited by: 0
Authors
Ismail-Fawaz, Ali [1]
Devanne, Maxime [1]
Berretti, Stefano [2]
Weber, Jonathan [1]
Forestier, Germain [1,3]
Affiliations
[1] Univ Haute Alsace, IRIMAS, Mulhouse, France
[2] Univ Florence, MICC, Florence, Italy
[3] Monash Univ, DSAI, Melbourne, Australia
Keywords
Time series classification; Deep learning; Convolutional neural networks; DepthWise separable convolutions
DOI
10.1007/s41060-024-00708-5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Deep learning models have been shown to be a powerful solution for Time Series Classification (TSC). State-of-the-art architectures, while producing promising results on the UCR and the UEA archives, present a high number of trainable parameters. This can lead to long training times, high CO2 emissions and power consumption, as well as a possible increase in the number of FLoating-point Operations Per Second (FLOPS). In this paper, we present a new architecture for TSC, the Light Inception with boosTing tEchnique (LITE), with only 2.34% of the number of parameters of the state-of-the-art InceptionTime model, while preserving performance. This architecture, with only 9814 trainable parameters thanks to the use of DepthWise Separable Convolutions (DWSC), is boosted by three techniques: multiplexing, custom filters, and dilated convolution. The LITE architecture, trained on the UCR archive, is 2.78 times faster than InceptionTime and consumes 2.79 times less CO2 and power, while achieving an average accuracy of 84.62% compared to 84.91% for InceptionTime. To evaluate the performance of the proposed architecture on multivariate time series data, we adapt LITE to handle multivariate time series; we call this version LITEMV. To bring theory into application, we also conduct experiments using LITEMV on multivariate time series representing human rehabilitation movements, showing that LITEMV is not only the most efficient model but also the best performing for this application on the Kimore dataset, a skeleton-based human rehabilitation exercises dataset. Moreover, to address the interpretability of LITEMV, we present a study using Class Activation Maps to understand the classification decisions taken by the model during evaluation.
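The abstract describes a block built from DepthWise Separable Convolutions combined with multiplexing (parallel branches with different kernel sizes) and dilated convolution. Below is a minimal sketch, assuming TensorFlow/Keras, of what such a block could look like; the kernel sizes, filter counts, dilation rate, input shape, and class count are illustrative assumptions, not the paper's exact configuration (which also relies on hand-crafted custom filters not shown here).

# Minimal sketch (TensorFlow/Keras assumed) of a LITE-style block:
# parallel ("multiplexed") DepthWise Separable Convolutions with
# different kernel sizes plus a dilated branch.
import tensorflow as tf
from tensorflow.keras import layers

def lite_like_block(inputs, n_filters=32, kernel_sizes=(2, 4, 8), dilation_rate=2):
    branches = []
    for k in kernel_sizes:
        # DepthWise Separable Convolution: depthwise spatial filtering followed
        # by a pointwise (1x1) mixing convolution, which is what keeps the
        # parameter count low compared to a standard Conv1D.
        x = layers.SeparableConv1D(n_filters, kernel_size=k,
                                   padding="same", use_bias=False)(inputs)
        branches.append(x)
    # Dilated branch: enlarges the receptive field without adding parameters
    # per kernel element.
    d = layers.SeparableConv1D(n_filters, kernel_size=kernel_sizes[-1],
                               dilation_rate=dilation_rate,
                               padding="same", use_bias=False)(inputs)
    branches.append(d)
    x = layers.Concatenate()(branches)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

# Usage on a univariate series of length 128 with 10 classes (hypothetical):
inp = layers.Input(shape=(128, 1))
out = layers.GlobalAveragePooling1D()(lite_like_block(inp))
out = layers.Dense(10, activation="softmax")(out)
model = tf.keras.Model(inp, out)

In this sketch the three parallel branches play the role of multiplexing, while the separable convolutions keep the trainable parameter count small; a multivariate variant in the spirit of LITEMV would only change the number of input channels in the Input shape.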
Pages: 21