Explainable AI methodology for understanding fault detection results during Multi-Mode operations

Cited by: 4
Authors
Bhakte, Abhijit [1 ]
Kumawat, Piyush Kumar [1 ]
Srinivasan, Rajagopalan [1 ,2 ]
Affiliations
[1] Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai 600036, India
[2] American Express Lab for Data Analytics, Risk and Technology, Indian Institute of Technology Madras, Chennai 600036, India
Keywords
Process monitoring; Deep learning; Explainable artificial intelligence; Multi-mode operations; Principal component analysis; Markov model; Transitions; Diagnosis; Network
DOI
10.1016/j.ces.2024.120493
Chinese Library Classification (CLC)
TQ [Chemical Industry]
Subject Classification Code
0817
Abstract
Multi-mode operations are prevalent in the chemical industry, and various methods have been proposed for monitoring them. Of these, AI-based approaches such as Deep Neural Networks (DNNs) are becoming popular due to their higher accuracy. However, the lack of transparency of DNNs hinders their widespread adoption in safety-critical applications such as process monitoring. This work addresses this limitation by proposing an Explainable AI (XAI) methodology for multi-mode operations. The proposed methodology incorporates a supervisory system that identifies the current operational mode. This information is used by an Integrated Gradients (IG)-based XAI method to configure mode-specific baselines and thereby generate DNN explanations corresponding to each operational mode. The ability of this methodology to generate reliable explanations that aid plant operators is illustrated through a simulated CSTR process, the Tennessee Eastman process, and a pilot-scale Multiphase Flow Facility case study.
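The sketch below illustrates the core idea summarized in the abstract: Integrated Gradients attributes a DNN prediction to input variables relative to a baseline, and the baseline is switched according to the operational mode reported by the supervisory system. This is a minimal illustration rather than the authors' implementation; the model, the supervisory_system call, and the mode_baselines dictionary are hypothetical placeholders, and PyTorch is assumed for the gradient computation.

import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    # x and baseline: 1 x n_features tensors of process measurements.
    # Approximate the IG attribution of the target-class logit for x,
    # relative to the baseline, via a Riemann sum over the straight-line
    # path from baseline to x.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)    # (steps, n_features)
    path.requires_grad_(True)
    logits = model(path)                         # forward pass on all interpolated points
    logits[:, target_class].sum().backward()     # gradients of the target logit w.r.t. path
    avg_grad = path.grad.mean(dim=0)             # average gradient along the path
    return (x - baseline).squeeze(0) * avg_grad  # per-variable attribution

# Hypothetical usage: select the baseline for the mode reported by the
# supervisory system, then explain the DNN's prediction for sample x.
# mode = supervisory_system.current_mode(x)
# attributions = integrated_gradients(model, x, mode_baselines[mode],
#                                     target_class=predicted_fault)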
Pages: 18