Agnostic local explanation for time series classification

Cited by: 31
Authors
Guilleme, Mael [1 ]
Masson, Veronique [2 ]
Roze, Laurence [3 ]
Termier, Alexandre [2 ]
Affiliations
[1] Univ Rennes, Energiency, Inria, CNRS, IRISA, Rennes, France
[2] Univ Rennes, IRISA, Inria, CNRS, Rennes, France
[3] Univ Rennes, INSA, Inria, CNRS, IRISA, Rennes, France
Source
2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019) | 2019
Keywords
Interpretability; Time series classification; Local explanations;
DOI
10.1109/ICTAI.2019.00067
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advances in Machine Learning (such as Deep Learning) have brought tremendous gains in classification accuracy. However, these approaches build complex non-linear models, making the resulting predictions difficult for humans to interpret. The field of model interpretability has therefore recently emerged, aiming to address this issue by designing methods that explain a posteriori the predictions of complex learners. Interpretability frameworks such as LIME and SHAP have been proposed for tabular, image, and text data. Nowadays, with the advent of the Internet of Things and of pervasive monitoring, time series have become ubiquitous and their classification is a crucial task in many application domains. As in other data domains, state-of-the-art time series classifiers rely on complex models and typically do not provide intuitive, easily interpretable outputs, yet no interpretability framework had so far been proposed for this type of data. In this paper, we propose the first agnostic Local Explainer For Time Series classificaTion (LEFTIST). LEFTIST provides explanations for predictions made by any time series classifier. Our thorough experiments on synthetic and real-world datasets show that the explanations provided by LEFTIST are at once faithful to the classification model and understandable by human users.
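The general mechanism behind agnostic local explainers of this family (LIME-like, which LEFTIST builds on) can be sketched as follows: split the series into segments, generate neighbors of the instance by masking random subsets of segments, query the black-box classifier on those neighbors, and fit a weighted linear model whose coefficients give per-segment importances. This is a minimal illustrative sketch, not the exact LEFTIST procedure; the segmentation, the mean-replacement perturbation, and the similarity weighting are all simplifying assumptions, and `explain_time_series` is a hypothetical helper name.

```python
import numpy as np

def explain_time_series(series, predict_proba, n_segments=8, n_samples=200, rng=None):
    """LIME-style local explanation sketch for a black-box time series classifier.

    `predict_proba` maps a batch of series with shape (n, T) to class
    probabilities with shape (n, C). Returns one importance weight per segment.
    Mean replacement for masked segments is an illustrative assumption.
    """
    rng = np.random.default_rng(rng)
    T = len(series)
    bounds = np.linspace(0, T, n_segments + 1).astype(int)  # segment boundaries

    # Binary interpretable representation: 1 = keep segment, 0 = mask it.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    masks[0] = 1  # include the unperturbed instance itself

    perturbed = np.empty((n_samples, T))
    for i, m in enumerate(masks):
        x = np.asarray(series, dtype=float).copy()
        for j in range(n_segments):
            if m[j] == 0:
                # Replace the masked segment with a neutral value (series mean).
                x[bounds[j]:bounds[j + 1]] = np.mean(series)
        perturbed[i] = x

    probs = np.asarray(predict_proba(perturbed))
    target = probs[:, probs[0].argmax()]  # explain the predicted class

    # Weight each neighbor by similarity to the original (fewer masked
    # segments -> higher weight), via a simple Gaussian-style kernel.
    masked_count = n_segments - masks.sum(axis=1)
    w = np.exp(-(masked_count ** 2) / (n_segments ** 2))

    # Weighted least squares on the binary masks; coefficients are the
    # per-segment contributions to the black-box prediction.
    A = np.hstack([np.ones((n_samples, 1)), masks])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], target * sw, rcond=None)
    return coef[1:]  # drop the intercept
```

For instance, for a classifier whose decision depends only on the beginning of the series, the first segment should receive the largest weight.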
Pages: 432-439
Number of pages: 8