An explainable artificial intelligence based approach for interpretation of fault classification results from deep neural networks

Cited: 30
Authors
Bhakte, Abhijit [1]
Pakkiriswamy, Venkatesh [1]
Srinivasan, Rajagopalan [1,2]
Affiliations
[1] Indian Institute of Technology Madras, Department of Chemical Engineering, Chennai 600036, India
[2] Indian Institute of Technology Madras, American Express Lab for Data Analytics, Risk & Technology, Chennai 600036, India
Keywords
Process monitoring; Deep learning; Explainable artificial intelligence; Shapley value; Tennessee Eastman; Quantitative model; Diagnosis
DOI
10.1016/j.ces.2021.117373
Chinese Library Classification (CLC)
TQ [Chemical Industry]
Subject Classification Code
0817
Abstract
Process monitoring is crucial to ensure operational reliability and to prevent industrial accidents. Data-driven methods have become the preferred approach for fault detection and diagnosis. Specifically, deep learning algorithms such as Deep Neural Networks (DNNs) show good potential even in complex processes. A key shortcoming of DNNs is the difficulty of interpreting their classification results. Emerging approaches from explainable Artificial Intelligence (XAI) seek to address this shortcoming. This paper proposes a method based on the Shapley value framework, implemented using integrated gradients, to identify the variables that lead a DNN to classify an input as a fault. The method estimates the marginal contribution of each variable to the DNN's output, averaged over the path from the baseline (in this case, the process's normal state) to the current sample. We illustrate the resulting variable attribution using a numerical example and the benchmark Tennessee Eastman process. Our results show that the proposed methodology provides accurate, sample-specific explanations of the DNN's predictions. These explanations can be used offline by the model developer to improve the DNN if necessary, and in real time by the plant operator to understand the black-box DNN's predictions and decide on operational strategies. (c) 2022 Elsevier Ltd. All rights reserved.
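The attribution described in the abstract can be read as integrated gradients accumulated along the straight-line path from the normal-operation baseline to the current sample. The Python/TensorFlow sketch below is a minimal illustration under that reading; the function name, the Keras classifier (model), the tensors x_sample and x_baseline, and the choice of 50 interpolation steps are assumptions for illustration, not the authors' implementation.

import tensorflow as tf

def integrated_gradients(model, x_sample, x_baseline, target_class, steps=50):
    # Illustrative sketch, not the paper's code.
    # x_sample and x_baseline: float32 tensors of shape (n_vars,);
    # the baseline is taken as the process's normal operating state.
    alphas = tf.linspace(0.0, 1.0, steps + 1)[:, tf.newaxis]        # (steps+1, 1)
    path = x_baseline + alphas * (x_sample - x_baseline)            # interpolated inputs along the path
    with tf.GradientTape() as tape:
        tape.watch(path)
        probs = model(path)[:, target_class]                        # fault-class probability at each path point
    grads = tape.gradient(probs, path)                              # gradients w.r.t. every process variable
    # Trapezoidal average of the gradients approximates the path integral
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (x_sample - x_baseline) * avg_grads                      # one attribution per process variable

A useful sanity check on such a sketch is the completeness property: the attributions should sum approximately to the difference between the fault-class probability at the current sample and at the normal-state baseline, mirroring the efficiency axiom of the Shapley value framework.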
Pages: 16