Assessing Model Requirements for Explainable AI: A Template and Exemplary Case Study

Cited by: 0
Authors
Heider, Michael [1 ]
Stegherr, Helena [1 ]
Nordsieck, Richard [2 ]
Hähner, Jörg [1]
Affiliations
[1] University of Augsburg, Organic Computing Group, Augsburg, Germany
[2] XITASO GmbH IT & Software Solutions, Augsburg, Germany
Keywords
Rule-based learning; self-explaining; decision support; sociotechnical system; learning classifier system; explainable AI; knowledge
DOI
10.1162/artl_a_00414
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In sociotechnical settings, human operators are increasingly assisted by decision support systems. By employing such systems, important properties of sociotechnical systems, such as self-adaptation and self-optimization, are expected to improve further. To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate transparent decision-making and highlight techniques that improve it. Furthermore, we present a novel approach to assessing application-specific explainability needs for the design of LCS models. For this, we propose an application-independent template of seven questions. We demonstrate the approach's use in an interview-based case study for a manufacturing scenario. We find that the answers received yield useful insights for designing a well-suited LCS model, as well as requirements for stakeholders to engage actively with an intelligent agent.
Pages: 468-486
Page count: 19
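
For context, the transparency the abstract attributes to LCSs stems from their use of populations of human-readable IF-THEN rules, where the rule that drives a decision can be shown to the operator verbatim. The Python sketch below is a minimal, hypothetical illustration of that idea only: the rule encoding, feature names, thresholds, and fitness values are invented for this example and do not reproduce the authors' system.

```python
# Minimal, hypothetical sketch of rule-based (LCS-style) decision support.
# NOT the paper's implementation; all names and values are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    """An IF-THEN classifier: interval conditions over named features."""
    conditions: dict[str, tuple[float, float]]  # feature -> (low, high)
    action: str                                 # recommended decision
    fitness: float                              # learned quality estimate

    def matches(self, sample: dict[str, float]) -> bool:
        # A rule matches when every conditioned feature lies in its interval.
        return all(f in sample and lo <= sample[f] <= hi
                   for f, (lo, hi) in self.conditions.items())

    def explain(self) -> str:
        # The rule itself doubles as the explanation shown to the operator.
        conds = " AND ".join(f"{lo} <= {f} <= {hi}"
                             for f, (lo, hi) in self.conditions.items())
        return f"IF {conds} THEN {self.action} (fitness={self.fitness:.2f})"

def decide(rules: list[Rule], sample: dict[str, float]) -> Optional[Rule]:
    """Return the fittest matching rule, or None if no rule matches."""
    matching = [r for r in rules if r.matches(sample)]
    return max(matching, key=lambda r: r.fitness) if matching else None

# Hypothetical manufacturing-style example (invented thresholds):
rules = [
    Rule({"temperature": (60.0, 80.0), "vibration": (0.0, 0.3)}, "continue", 0.91),
    Rule({"temperature": (80.0, 120.0)}, "reduce_speed", 0.87),
]
best = decide(rules, {"temperature": 95.0, "vibration": 0.2})
if best:
    # Prints: IF 80.0 <= temperature <= 120.0 THEN reduce_speed (fitness=0.87)
    print(best.explain())
```

Because the matched rule is itself the explanation, no post hoc interpretation step is needed; this is the sense in which rule-based models are often described as self-explaining.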