Explaining in Time: Meeting Interactive Standards of Explanation for Robotic Systems

Cited by: 10
Authors
Arnold, Thomas [1 ]
Kasenberg, Daniel [1 ]
Scheutz, Matthias [1 ]
Affiliation
[1] Tufts Univ, 200 Boston Ave, Medford, MA 02155 USA
Keywords
MODELS
DOI
10.1145/3457183
Chinese Library Classification
TP24 [Robotics]
Subject Classification
080202; 1405
Abstract
Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, have become trickier questions in light of opaque "black box" systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere "interpretation," which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and in particular justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot's actions can face explanatory demands about how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system's operation along causal, purposive, and justificatory lines. These explanations will need to make appropriate reference to principles and norms; explanations based on mere "interpretability" will ultimately fail to connect the robot's behaviors to their appropriate determinants. We then lay out the foundations for a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants. Such explanations track the robot's actual decision-making and behavior, which are themselves determined by normative principles the robot can describe and use for justifications.
Pages: 23
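
To make the abstract's three explanatory dimensions concrete, here is a minimal Python sketch of an explanation record that tracks a decision along causal, purposive, and justificatory lines. The Explanation class, its field names, and the example content are illustrative assumptions, not the component algorithms the article itself presents.

# Illustrative sketch only: a minimal container for the three explanatory
# dimensions named in the abstract (causal, purposive, justificatory).
# The class, fields, and example content are hypothetical, not the
# authors' actual architecture.
from dataclasses import dataclass
from typing import List

@dataclass
class Explanation:
    action: str               # the behavior being explained
    causal_trace: List[str]   # how the system came to act (decision steps)
    purpose: str              # the goal or task the action served
    norms: List[str]          # principles cited to justify the action

def render(e: Explanation) -> str:
    """Compose a human-readable explanation covering all three dimensions."""
    return "\n".join([
        f"Action: {e.action}",
        "How I decided: " + "; ".join(e.causal_trace),
        f"What it was for: {e.purpose}",
        "Norms I respected: " + "; ".join(e.norms),
    ])

# Example: a robot justifying a deviation from its expected delivery route.
print(render(Explanation(
    action="waited at the doorway",
    causal_trace=["detected a person approaching",
                  "yielding had priority in my plan"],
    purpose="deliver the package without blocking the hallway",
    norms=["do not obstruct people in shared spaces"],
)))

The point of keeping the three fields separate is that each answers a different explanatory demand from the abstract: the causal trace reports the actual decision process, the purpose ties the action to a goal, and the norms ground a justification a human interactant could contest in dialogue.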