Evaluating eXplainable artificial intelligence tools for hard disk drive predictive maintenance

Cited by: 19
Authors
Ferraro, Antonino [1 ]
Galli, Antonio [1 ]
Moscato, Vincenzo [1 ,2 ]
Sperli, Giancarlo [1 ,2 ]
Affiliations
[1] Univ Naples Federico II, Dept Elect Engn & Informat Technol DIETI, Via Claudio 21, I-80125 Naples, Italy
[2] Complesso Univ Monte Sant Angelo, CINI ITEM Natl Lab, I-80145 Naples, Italy
Keywords
EXplainable artificial intelligence; Predictive maintenance; LSTM-based model; Deep learning; Fault diagnosis; Health care; Industry
DOI
10.1007/s10462-022-10354-7
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In recent years, one of the main challenges in Industry 4.0 has been the optimization of maintenance operations, which has been widely addressed by predictive maintenance frameworks aiming to jointly reduce maintenance costs and downtime intervals. Nevertheless, the most recent and effective frameworks rely mainly on deep learning models, whose internal representations (black boxes) are too complex for human understanding, making it difficult to explain their predictions. This issue can be addressed by eXplainable artificial intelligence (XAI) methodologies, which aim to explain the decisions of data-driven AI models, characterizing the strengths and weaknesses of the decision-making process and making results more understandable to humans. In this paper, we focus on explaining the predictions made by a recurrent neural network based model, which requires a three-dimensional dataset because it exploits spatial and temporal features to estimate the remaining useful life (RUL) of hard disk drives (HDDs). In particular, we analyze in depth how explanations of RUL predictions provided by different XAI tools, compared using several metrics and illustrated through the generated dashboards, can genuinely support predictive maintenance tasks by means of both global and local explanations. To this end, we have realized an explanation framework able to investigate the local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP) tools with respect to the Backblaze Dataset and a long short-term memory (LSTM) prediction model. The achieved results show that SHAP outperforms LIME on almost all the considered metrics, making it a suitable and effective solution for HDD predictive maintenance applications.
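The SHAP tool evaluated in the abstract is built on Shapley values: a feature's attribution is its average marginal contribution across all coalitions of the remaining features. As a minimal sketch of that underlying computation, the snippet below enumerates coalitions exactly for a toy linear stand-in model (the paper's actual model is an LSTM; the function names and baseline choice here are illustrative assumptions, and in practice one would use the `shap` library rather than this exponential-time enumeration):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for one prediction (the principle behind
    SHAP). Features absent from a coalition are replaced by a baseline
    value. Cost is exponential in the feature count, so this is for
    illustration only."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coalition in combinations(others, size):
                # model output with only the coalition's features present
                z = [x[j] if j in coalition else baseline[j] for j in range(n)]
                v_without = predict(z)
                # model output after adding feature i to the coalition
                z[i] = x[i]
                v_with = predict(z)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (v_with - v_without)
    return phi

# Toy stand-in for an RUL predictor (hypothetical; the paper uses an LSTM)
weights = [2.0, -1.0, 0.5]
predict = lambda z: sum(w * v for w, v in zip(weights, z))

x = [1.0, 2.0, 3.0]          # instance to explain
baseline = [0.0, 0.0, 0.0]   # reference input
phi = shapley_values(predict, x, baseline)
# For a linear model, phi[i] == weights[i] * (x[i] - baseline[i]), and the
# attributions sum to predict(x) - predict(baseline) (the efficiency axiom).
```

The efficiency property in the final comment is what makes SHAP's local attributions directly comparable to the model's output, one of the qualities the paper's metrics evaluate.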
Pages: 7279-7314
Page count: 36
References
73 entries in total
[21] Gao Z W. IEEE Transactions on Industrial Electronics, 2015, 62: 3768. DOI: 10.1109/TIE.2015.2417501; 10.1109/TIE.2015.2419013.
[22] Genova M. UNE-EN 13306, 2018.
[23] Goldstein A, Kapelner A, Bleich J, Pitkin E. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation. Journal of Computational and Graphical Statistics, 2015, 24(1): 44-65.
[24] Goodman B, Flaxman S. European Union Regulations on Algorithmic Decision Making and a "Right to Explanation". AI Magazine, 2017, 38(3): 50-57.
[25] Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D. A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, 2019, 51(5).
[26] Hailemariam Y, Yazdinejad A, Parizi R M, Srivastava G, Dehghantanha A. An Empirical Evaluation of AI Deep Explainable Tools. 2020 IEEE Globecom Workshops (GC Wkshps), 2020.
[27] Hara S. Proceedings of Machine Learning Research, 2018, 84.
[28] Honegger M. arXiv, 2018.
[29] Huang Y, Wang H, Khajepour A, Ding H, Yuan K, Qin Y. A Novel Local Motion Planning Framework for Autonomous Vehicles Based on Resistance Network and Model Predictive Control. IEEE Transactions on Vehicular Technology, 2020, 69(1): 55-66.
[30] Hutson M. Basic Instincts. Science, 2018, 360(6391): 845-847.