A methodology to guide companies in using Explainable AI-driven interfaces in manufacturing contexts

Times Cited: 0
Authors
Grandi, Fabio [1 ]
Zanatto, Debora [2 ]
Capaccioli, Andrea [2 ]
Napoletano, Linda [2 ]
Cavallaro, Sara [3 ]
Peruzzini, Margherita [1 ]
Affiliations
[1] Univ Modena & Reggio Emilia, Dept Engn Enzo Ferrari, Via Pietro Vivarelli 10, I-41125 Modena, Italy
[2] Deep Blue Srl, Via Daniele Manin 53, I-00185 Rome, Italy
[3] CNH Ind SpA, Viale Nazioni 55, I-41122 Modena, Italy
Source
5TH INTERNATIONAL CONFERENCE ON INDUSTRY 4.0 AND SMART MANUFACTURING, ISM 2023 | 2024 / Vol. 232
Funding
EU Horizon 2020;
Keywords
explainable AI (XAI); artificial intelligence (AI); manufacturing; Human-Machine Interaction; user interface;
DOI
10.1016/j.procs.2024.02.127
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Nowadays, the increasing integration of artificial intelligence (AI) technologies in manufacturing processes is raising the need for users to understand and interpret the decision-making processes of complex AI systems. Traditional black-box AI models often lack transparency, making it challenging for users to comprehend the reasoning behind their outputs. In contrast, Explainable Artificial Intelligence (XAI) techniques provide interpretability by revealing the internal mechanisms of AI models, making them more trustworthy and facilitating human-AI collaboration. To promote the dissemination of XAI models, this paper proposes a matrix-based methodology to design XAI-driven user interfaces in manufacturing contexts. It helps in mapping the users' needs and identifying the "explainability visualization types" that best fit the end users' requirements for the specific context of use. The proposed methodology was applied in the XMANAI European Project (https://ai4manufacturing.eu), aimed at creating a novel AI platform to support XAI-driven decision making in manufacturing plants. Results showed that the proposed methodology is able to guide companies in the correct implementation of XAI models, realizing the full potential of AI while ensuring human oversight and control. (c) 2023 The Authors. Published by ELSEVIER B.V.
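The matrix-based mapping described in the abstract can be illustrated with a minimal sketch. This is not code from the paper: the user needs, visualization types, and scores below are hypothetical placeholders, assuming a simple matrix where each cell rates how well a visualization type satisfies a user need in the given context of use.

```python
# Hypothetical sketch of a needs-to-visualization mapping matrix.
# Needs, visualization types, and scores are illustrative, not from the paper.

needs = [
    "understand model decision",
    "assess prediction confidence",
    "trace input influence",
]
viz_types = [
    "feature-importance chart",
    "counterfactual example",
    "confusion matrix",
]

# scores[i][j]: how well viz_types[j] addresses needs[i] (0 = not at all, 3 = fully),
# e.g. elicited from end users through interviews or questionnaires.
scores = [
    [3, 2, 1],
    [1, 1, 3],
    [3, 2, 0],
]

def best_fit(scores, viz_types):
    """For each user need, pick the visualization type with the highest score."""
    return [viz_types[max(range(len(row)), key=row.__getitem__)] for row in scores]

for need, viz in zip(needs, best_fit(scores, viz_types)):
    print(f"{need} -> {viz}")
```

In practice such a matrix would be filled in per company and per context of use, and ties or low scores would signal that no existing visualization type fits the need well.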
Pages: 3112 - 3120
Page count: 9