Hybridization of model-specific and model-agnostic methods for interpretability of Neural network predictions: Application to a power plant

Cited by: 13
Authors
Danesh, Tina [1 ]
Ouaret, Rachid [1 ]
Floquet, Pascal [1 ]
Negny, Stephane [1 ]
Affiliation
[1] Univ Toulouse, Lab Genie Chim, CNRS,INPT,UPS, LGC UMR 5503, 4 Allee Emile Monso, F-31030 Toulouse, France
Keywords
Machine learning; Interpretability; Sensitivity analysis; Model-specific; Model-agnostic; Partial dependence plots; Individual conditional expectation; PROCESS FAULT-DETECTION; SENSITIVITY-ANALYSIS; QUANTITATIVE MODEL; APPROXIMATION;
DOI
10.1016/j.compchemeng.2023.108306
Chinese Library Classification
TP39 [Computer Applications];
Discipline Classification Codes
081203; 0835;
Abstract
Advanced computing performance and machine learning accuracy have pushed engineers and researchers toward increasingly complex mathematical models, and methods such as deep neural networks have become ubiquitous. However, the interpretability of machine learning predictions in decision processes has been identified as a hot topic in several engineering fields, leading to confusion across communities. This paper discusses a methodological framework of hybrid interpretability tools for neural network prediction in an engineering application. These tools analyze a decision's consequences under different circumstances and situations. The aim is to reconcile ML prediction accuracy with interpretability in a global approach that makes systems more flexible. In this study, the interpretability of neural network predictions is treated from two perspectives: (i) model-specific methods, such as partial derivatives, and (ii) model-agnostic methods, which can be applied to the predictions of any ML model. To visualize and explain the inputs' impacts on prediction results, Partial Dependence Plots (PDP), Individual Conditional Expectation (ICE), and Accumulated Local Effects (ALE) are used and compared. The prediction of the electrical power (PE) output of a combined cycle power plant was chosen to demonstrate the feasibility of these methods under real operating conditions. The results show that the most influential input parameter among ambient temperature (AT), atmospheric pressure (AP), vacuum (V), and relative humidity (RH) is AT. The visualization outputs identify both the direction (positive or negative) and the form (linear, nonlinear, random, stepwise) of the relationship between the input variables and the model's output. The interpretation results are consistent with the literature.
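As an illustrative sketch only (not the paper's actual pipeline or data): a small neural network is fit on synthetic stand-ins for the four ambient inputs the abstract names (AT, V, AP, RH), and a one-dimensional partial dependence curve for AT is computed by hand. The data-generating coefficients and ranges are assumptions chosen to mimic the combined-cycle-power-plant setting, not values from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
AT = rng.uniform(2, 35, n)       # ambient temperature, degC
V = rng.uniform(25, 82, n)       # exhaust vacuum, cmHg
AP = rng.uniform(995, 1030, n)   # atmospheric pressure, mbar
RH = rng.uniform(25, 100, n)     # relative humidity, %
X = np.column_stack([AT, V, AP, RH])
# Synthetic PE (MW), roughly decreasing in AT as the abstract reports;
# the coefficients below are invented for illustration only.
PE = (495.0 - 1.7 * AT - 0.3 * V + 0.06 * (AP - 1000.0)
      - 0.01 * RH + rng.normal(0.0, 2.0, n))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                 max_iter=2000, random_state=0),
).fit(X, PE)

# Partial dependence of the prediction on AT (feature 0): fix AT at each
# grid value for every sample, predict, and average. The per-sample curves
# before averaging are exactly the ICE curves the abstract mentions.
grid = np.linspace(AT.min(), AT.max(), 20)
pdp = []
for g in grid:
    Xg = X.copy()
    Xg[:, 0] = g
    pdp.append(model.predict(Xg).mean())

# A downward PDP indicates PE falls as ambient temperature rises.
print(f"PD(AT) at {grid[0]:.1f} degC: {pdp[0]:.1f} MW; "
      f"at {grid[-1]:.1f} degC: {pdp[-1]:.1f} MW")
```

Averaging the curves yields the PDP; plotting them individually gives ICE; ALE instead accumulates local prediction differences over small intervals of AT, which avoids evaluating the model at unrealistic input combinations when features are correlated.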
Pages: 12