Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation

Cited by: 12
Authors
Dogan, Fethiye Irmak [1]
Torre, Ilaria [1]
Leite, Iolanda [1]
Affiliation
[1] KTH Royal Inst Technol, Stockholm, Sweden
Source
PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22) | 2022
Funding
Swedish Research Council
Keywords
Resolving Ambiguities; Follow-Up Clarifications; Referring Expressions; Language
DOI
10.1109/HRI53351.2022.9889368
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
When a robot tries to comprehend its human partner's request by identifying the referenced objects in human-robot conversation, ambiguities can occur because the environment may contain many similar objects or because the objects described in the request are unknown to the robot. When ambiguities arise, most systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption can lead to task failure, especially in complex real-world environments. In this paper, we address this challenge with an interactive system that asks follow-up clarification questions to disambiguate the described objects, using the parts of the request the robot could understand and the objects in the environment that are known to the robot. To evaluate our system at disambiguating referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to re-describe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects in fewer attempts (i.e., conversational turns). Moreover, when people were asked clarification questions, they perceived the task as easier and rated the robot's task understanding and competence higher. Our code and anonymized dataset are publicly available.
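To make the mechanism in the abstract concrete, below is a minimal Python sketch of the core loop it describes: match the understood parts of a request against the objects the robot knows, and ask a targeted follow-up clarification instead of requesting a full re-description when the match is ambiguous or empty. This is an illustration under assumed data structures; Object, ground_or_clarify, and the attribute names are hypothetical, not the authors' implementation (their actual code is linked from the paper).

    # Minimal sketch of follow-up clarification for referring-expression
    # ambiguity. All identifiers here are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Object:
        name: str                                       # e.g. "mug"
        attributes: dict = field(default_factory=dict)  # e.g. {"color": "red"}

    def ground_or_clarify(understood: dict, scene: list[Object]):
        """Match the understood parts of a request against known objects;
        return the target if unique, otherwise a follow-up clarification."""
        # Keep only objects consistent with every attribute the robot understood.
        candidates = [
            o for o in scene
            if all(o.attributes.get(k) == v for k, v in understood.items())
        ]
        if len(candidates) == 1:
            return ("grounded", candidates[0])
        if not candidates:
            # Nothing matches: the described object may be unknown to the robot.
            return ("clarify", "I don't see an object like that. Can you point it out?")
        # Several matches: ask about an attribute that splits the candidates,
        # rather than asking the user to repeat the whole request.
        for attr in ("color", "size", "location"):
            values = {o.attributes.get(attr) for o in candidates}
            if len(values) > 1 and None not in values:
                options = " or ".join(sorted(values))
                return ("clarify", f"Do you mean the {options} one?")
        return ("clarify", "Can you describe the object differently?")

    # Example: "bring me the mug" in a scene with two differently colored mugs.
    scene = [Object("mug", {"type": "mug", "color": "red"}),
             Object("mug", {"type": "mug", "color": "blue"})]
    print(ground_or_clarify({"type": "mug"}, scene))
    # -> ('clarify', 'Do you mean the blue or red one?')

The sketch mirrors the paper's finding at a schematic level: a question that references a distinguishing attribute can resolve the ambiguity in one extra turn, whereas a generic "please repeat" gives the user no hint about what the robot failed to understand.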
Pages: 461-469
Page count: 9