A new approach based on association rules to add explainability to time series forecasting models

Cited by: 26
Authors
Troncoso-Garcia, A. R. [1 ]
Martinez-Ballesteros, M. [2 ]
Martinez-Alvarez, F. [1 ]
Troncoso, A. [1 ]
Affiliations
[1] Univ Pablo de Olavide, Data Sci & Big Data Lab, ES-41013 Seville, Spain
[2] Univ Seville, Dept Comp Sci, ES-41012 Seville, Spain
Keywords
Explainable AI; Machine learning; Time series forecasting; Interpretability; Association rules; Black-box; Framework; Algorithm
DOI
10.1016/j.inffus.2023.01.021
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Machine learning and deep learning have become the most useful and powerful tools in recent years for mining information from large datasets. Despite their successful application to many research fields, it is widely known that some of these artificial-intelligence-based solutions are considered black-box models, meaning that most experts find it difficult to explain and interpret the models and why they generate such outputs. In this context, explainable artificial intelligence is emerging with the aim of providing black-box models with sufficient interpretability, so that models can be easily understood and further applied. This work proposes a novel method to explain black-box models, using numeric association rules to explain and interpret multi-step time series forecasting models. A multi-objective algorithm is used to discover quantitative association rules from the target model. Then, visual explanation techniques are applied to make the rules more interpretable. Data on Spanish electricity energy consumption has been used to assess the suitability of the proposal.
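The quantitative association rules mentioned in the abstract can be illustrated with a minimal sketch. It evaluates a single rule of the form "if an input feature lies in one interval, the model's forecast lies in another" by its support and confidence; the function name, intervals, and toy data are all hypothetical (the paper's actual method searches for such interval rules with a multi-objective evolutionary algorithm, which is not reproduced here).

```python
def rule_metrics(pairs, antecedent, consequent):
    """Support and confidence of a quantitative association rule.

    pairs      : list of (x, y) observations, e.g. (temperature, forecast)
    antecedent : (lo, hi) interval the input x must fall in
    consequent : (lo, hi) interval the forecast y must fall in
    """
    a_lo, a_hi = antecedent
    c_lo, c_hi = consequent
    n = len(pairs)
    # Observations satisfying the antecedent interval.
    ant = [(x, y) for x, y in pairs if a_lo <= x <= a_hi]
    # Of those, observations also satisfying the consequent interval.
    both = [(x, y) for x, y in ant if c_lo <= y <= c_hi]
    support = len(both) / n if n else 0.0          # rule frequency in the data
    confidence = len(both) / len(ant) if ant else 0.0  # rule reliability
    return support, confidence

# Toy data: (temperature, forecast consumption) pairs from a hypothetical model.
data = [(10, 200), (12, 210), (25, 400), (27, 420), (30, 450)]
# Rule: temperature in [24, 32]  ->  forecast in [380, 460]
sup, conf = rule_metrics(data, antecedent=(24, 32), consequent=(380, 460))
print(sup, conf)  # 0.6 1.0
```

A rule with high confidence and non-trivial support, rendered as human-readable intervals, is what makes this style of explanation interpretable to domain experts.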
Pages: 169-180 (12 pages)