From Explainable AI to Explainable Simulation: Using Machine Learning and XAI to understand System Robustness

Cited by: 1
Authors
Feldkamp, Niclas [1 ]
Strassburger, Steffen [1 ]
Affiliations
[1] Tech Univ Ilmenau, Ilmenau, Germany
Source
PROCEEDINGS OF THE 2023 ACM SIGSIM INTERNATIONAL CONFERENCE ON PRINCIPLES OF ADVANCED DISCRETE SIMULATION, ACM SIGSIM-PADS 2023 | 2023
Keywords
machine learning; deep learning; robustness optimization; simulation; explainable AI; XAI; DESIGN;
DOI
10.1145/3573900.3591114
CLC classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Evaluating robustness is an important goal in simulation-based analysis. Robustness is achieved when the controllable factors of a system are adjusted in such a way that any possible variance in uncontrollable factors (noise) has minimal impact on the variance of the desired output. The optimization of system robustness using simulation is a dedicated and well-established research direction. However, once a simulation model is available, there is a lot of potential to learn more about the inherent relationships in the system, especially regarding its robustness. Data farming offers the possibility to explore large design spaces using smart experiment design, high-performance computing, automated analysis, and interactive visualization. Sophisticated machine learning methods excel at recognizing and modelling the relationship between large amounts of simulation input and output data. However, investigating and analyzing this modelled relationship can be very difficult, since most modern machine learning methods like neural networks or random forests are opaque black boxes. Explainable Artificial Intelligence (XAI) can help to peek into this black box, helping us to explore and learn about relations between simulation input and output. In this paper, we introduce a concept for using data farming, machine learning, and XAI to investigate and understand the system robustness of a given simulation model.
Pages: 96-106 (11 pages)