Augmenting post-hoc explanations for predictive process monitoring with uncertainty quantification via conformalized Monte Carlo dropout

Cited by: 0
Authors
Mehdiyev, Nijat [1 ]
Majlatow, Maxim
Fettke, Peter
Affiliations
[1] German Research Center for Artificial Intelligence (DFKI), D-66123 Saarbrücken, Germany
Keywords
Explainable artificial intelligence; Uncertainty quantification; Conformal prediction; Predictive process monitoring
Keywords Plus
ABSOLUTE ERROR (MAE); REGRESSION; BEHAVIOR; MACHINE; FRAMEWORK; RMSE
DOI
10.1016/j.datak.2024.102402
CLC classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This study presents a novel approach to improve the transparency and reliability of deep learning models in predictive process monitoring (PPM) by integrating uncertainty quantification (UQ) and explainable artificial intelligence (XAI) techniques. We introduce the conformalized Monte Carlo dropout method, which combines Monte Carlo dropout for uncertainty estimation with conformal prediction (CP) to generate reliable prediction intervals. Additionally, we enhance post-hoc explanation techniques such as individual conditional expectation (ICE) plots and partial dependence plots (PDP) with uncertainty information, including credible and conformal predictive intervals. Our empirical evaluation in the manufacturing industry demonstrates the effectiveness of these approaches in refining strategic and operational decisions. This research contributes to advancing PPM and machine learning by bridging the gap between model transparency and high-stakes decision-making scenarios.
Pages: 29
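The record contains no code, but the mechanism described in the abstract, Monte Carlo dropout for uncertainty estimation wrapped in split conformal prediction to calibrate the resulting intervals, can be illustrated with a short, self-contained example. The sketch below is an assumption-laden toy, not the authors' implementation: it uses a hypothetical PyTorch MLP on synthetic data and the plain absolute-residual split-conformal score, whereas the paper's exact nonconformity measure, normalization, and predictive process monitoring features may differ.

```python
# Minimal sketch of conformalized Monte Carlo dropout for regression.
# Assumptions (not from the paper): a toy MLP, synthetic 1-D data, and the
# plain split-conformal absolute-residual score.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
np.random.seed(0)

# Synthetic regression data split into training, calibration, and test sets.
X = np.random.uniform(-3.0, 3.0, size=(1500, 1)).astype(np.float32)
y = (np.sin(X[:, 0]) + 0.2 * np.random.randn(1500)).astype(np.float32)
X_train, y_train = X[:1000], y[:1000]
X_cal, y_cal = X[1000:1300], y[1000:1300]
X_test, y_test = X[1300:], y[1300:]

# Small MLP with dropout; keeping dropout active at prediction time yields
# stochastic forward passes (Monte Carlo dropout).
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X_t = torch.from_numpy(X_train)
y_t = torch.from_numpy(y_train).unsqueeze(1)
model.train()
for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    optimizer.step()

def mc_dropout_predict(x, n_samples=100):
    """Mean and std over stochastic forward passes with dropout left on."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        draws = torch.stack(
            [model(torch.from_numpy(x)).squeeze(1) for _ in range(n_samples)]
        )
    return draws.mean(0).numpy(), draws.std(0).numpy()

# Split-conformal calibration on absolute residuals of the MC-dropout mean.
alpha = 0.1  # target 90% marginal coverage
mu_cal, _ = mc_dropout_predict(X_cal)
scores = np.abs(y_cal - mu_cal)
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(scores, q_level, method="higher")

# Conformal prediction intervals around the MC-dropout mean on the test set.
mu_test, sigma_test = mc_dropout_predict(X_test)
lower, upper = mu_test - q_hat, mu_test + q_hat
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"Empirical coverage: {coverage:.3f} (target {1 - alpha:.2f})")
print(f"Conformal half-width: {q_hat:.3f}; mean MC-dropout std: {sigma_test.mean():.3f}")
```

The printed comparison of the conformal half-width with the average MC-dropout standard deviation mirrors the abstract's point that credible and conformal intervals convey complementary uncertainty information around the same point prediction.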