Explainability for Human-Robot Collaboration

Cited by: 1
Authors
Yadollahi, Elmira [1 ]
Romeo, Marta [2 ]
Dogan, Fethiye Irmak [1 ]
Johal, Wafa [3 ]
De Graaf, Maartje [4 ]
Levy-Tzedek, Shelly [5 ]
Leite, Iolanda [1 ]
Affiliations
[1] KTH Royal Inst Technol, Stockholm, Sweden
[2] Heriot Watt Univ, Edinburgh, Scotland
[3] Univ Melbourne, Melbourne, Vic, Australia
[4] Univ Utrecht, Utrecht, Netherlands
[5] Ben Gurion Univ Negev, Beer Sheva, Israel
Source
COMPANION OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024 COMPANION | 2024
Funding
Australian Research Council;
Keywords
Explainable Robotics; XAI; Human-Centered Robot Explanations;
DOI
10.1145/3610978.3638154
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In human-robot collaboration, explainability bridges the communication gap between complex machine functionalities and humans. Understanding and generating explanations that enhance collaboration and mutual understanding between humans and machines is an active area of investigation in robotics and AI. A key to achieving such seamless collaboration is understanding end-users, whether naive or expert, and tailoring explanation features that are intuitive, user-centred, and contextually relevant. Advancing this topic requires not only modelling humans' expectations in order to generate explanations, but also developing metrics to evaluate the generated explanations and to assess how effectively autonomous systems communicate their intentions, actions, and decision-making rationale. This workshop is designed to tackle the nuanced role of explainability in enhancing efficiency, safety, and trust in human-robot collaboration. It aims to initiate discussions on the importance of generating and evaluating explainability features in autonomous agents, while also addressing challenges such as bias in explainability, the downsides of explainability, and deception in human-robot interaction.
Pages: 1364-1366
Page count: 3