Towards Responsible AI: Developing Explanations to Increase Human-AI Collaboration

Cited by: 2
Authors
De Brito Duarte, Regina [1 ]
Affiliations
[1] Inst Super Tecn, Lisbon, Portugal
Source
HHAI 2023: AUGMENTING HUMAN INTELLECT | 2023, Vol. 368
Keywords
Human-AI Interaction; Human-AI Collaboration; AI Trust; Explainable AI;
DOI
10.3233/FAIA230126
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most current XAI models are designed primarily to verify the input-output relationships of AI models, without considering context. This objective does not always align with the goal of human-AI collaboration, which is to enhance team performance and establish appropriate levels of trust. Developing XAI models that promote justified trust therefore remains a challenge in the AI field, but it is a crucial step towards responsible AI. This research focuses on developing an XAI model optimized for human-AI collaboration, with the specific goal of generating explanations that improve users' understanding of an AI system's limitations and increase warranted trust in it. To this end, a user experiment was conducted to analyze how including explanations in the decision-making process affects trust in AI.
Pages: 470 - 482
Page count: 13