Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems

Cited by: 6
Authors
Das, Devleena [1 ]
Kim, Been [2 ]
Chernova, Sonia [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Google Res, Mountain View, CA USA
Source
PROCEEDINGS OF 2023 28TH ANNUAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2023 | 2023
Keywords
Explainable AI; Intelligent Decision Support Systems; Planning
DOI
10.1145/3581641.3584055
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Intelligent decision support (IDS) systems leverage artificial intelligence techniques to generate recommendations that guide human users through the decision making phases of a task. However, a key challenge is that IDS systems are not perfect, and in complex real-world scenarios may produce suboptimal output or fail to work altogether. The field of explainable AI (XAI) has sought to develop techniques that improve the interpretability of black-box systems. While most XAI work has focused on single-classification tasks, the subfield of explainable AI planning (XAIP) has sought to develop techniques that make sequential decision making AI systems explainable to domain experts. Critically, prior work in applying XAIP techniques to IDS systems has assumed that the plan being proposed by the planner is always optimal, and therefore the action or plan being recommended as decision support to the user is always optimal. In this work, we examine novice user interactions with a non-robust IDS system - one that occasionally recommends suboptimal actions, and one that may become unavailable after users have become accustomed to its guidance. We introduce a new explanation type, subgoal-based explanations, for plan-based IDS systems, that supplements traditional IDS output with information about the subgoal toward which the recommended action would contribute. We demonstrate that subgoal-based explanations lead to improved user task performance in the presence of IDS recommendations, improve user ability to distinguish optimal and suboptimal IDS recommendations, and are preferred by users. Additionally, we demonstrate that subgoal-based explanations enable more robust user performance in the case of IDS failure, showing the significant benefit of training users for an underlying task with subgoal-based explanations.
Pages: 240-250
Number of pages: 11