Levels of explainable artificial intelligence for human-aligned conversational explanations

Cited by: 67
Authors
Dazeley, Richard [1 ]
Vamplew, Peter [2 ]
Foale, Cameron [2 ]
Young, Charlotte [2 ]
Aryal, Sunil [1 ]
Cruz, Francisco [1 ]
Affiliations
[1] Deakin Univ, Sch Informat Technol, Locked Bag 20000, Geelong, Vic 3220, Australia
[2] Federation Univ, Sch Engn Informat Technol & Phys Sci, Ballarat, Vic 3353, Australia
Keywords
Explainable Artificial Intelligence (XAI); Broad-XAI; Interpretable Machine Learning (IML); Artificial General Intelligence (AGI); Human-Computer Interaction (HCI); ACTION RECOGNITION; ROBOT; REPRESENTATION; ALGORITHM; FRAMEWORK; EMOTION; SYSTEMS; AGENTS; KNOWLEDGE; INFERENCE
DOI
10.1016/j.artint.2021.103525
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Over the last few years, there has been rapid research growth in eXplainable Artificial Intelligence (XAI) and the closely aligned field of Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investment by industry and governments, along with growing concern from the general public. People are affected by autonomous decisions every day, and the public needs to understand the decision-making process to accept the outcomes. However, the vast majority of XAI/IML applications focus on providing low-level 'narrow' explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insight into an agent's: beliefs and motivations; hypotheses about other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or the processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust an AI's decision-making. This paper aims to define levels of explanation and to describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, it surveys current approaches and discusses how different technologies can be integrated to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), thereby moving towards high-level 'strong' explanations. (C) 2021 Elsevier B.V. All rights reserved.
Pages: 29