Leveraging explainability for understanding object descriptions in ambiguous 3D environments

Cited by: 4
Authors
Dogan, Fethiye Irmak [1]
Melsion, Gaspar I. [1]
Leite, Iolanda [1]
Affiliations
[1] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, Div Robot Percept & Learning, Stockholm, Sweden
Funding
Swedish Research Council;
Keywords
explainability; resolving ambiguities; depth; referring expression comprehension (REC); real-world environments;
DOI
10.3389/frobt.2022.937772
CLC number
TP24 [Robotics];
Discipline codes
080202; 1405;
Abstract
For effective human-robot collaboration, it is crucial for robots to understand requests from users who perceive the three-dimensional space, and to ask reasonable follow-up questions when ambiguities arise. In comprehending users' object descriptions, existing studies have addressed this challenge only for limited object categories that can be detected or localized with off-the-shelf object detection and localization modules. Moreover, they have mostly comprehended object descriptions from flat RGB images, without considering the depth dimension. In the wild, however, it is impossible to limit the object categories that can be encountered during an interaction, and perception of the three-dimensional space, including depth information, is fundamental to successful task completion. To understand described objects and resolve ambiguities in the wild, we suggest, for the first time, a method that leverages explainability. Our method focuses on the active areas of an RGB scene to find the described objects without imposing the previous constraints on object categories and natural-language instructions. We further improve our method to identify described objects using the depth dimension. We evaluate our method on varied real-world images and observe that the regions it suggests can help resolve ambiguities. Compared with a state-of-the-art baseline, our method performs better in scenes with ambiguous objects that cannot be recognized by existing object detectors. We also show that using depth features significantly improves performance, both in scenes where depth data is critical to disambiguate the objects and across our evaluation dataset, which contains objects that can be specified with and without the depth dimension.
Pages: 19
References
67 in total
[11]
Dogan FI, 2019, IEEE INT C INT ROBOT, P4992, DOI 10.1109/IROS40897.2019.8968510
[12]   Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery [J].
Das, Devleena ;
Banerjee, Siddhartha ;
Chernova, Sonia .
2021 16TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI, 2021, :351-360
[13]   Vision-Language Transformer and Query Generation for Referring Segmentation [J].
Ding, Henghui ;
Liu, Chang ;
Wang, Suchen ;
Jiang, Xudong .
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, :16301-16310
[14]   Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation [J].
Dogan, Fethiye Irmak ;
Torre, Ilaria ;
Leite, Iolanda .
PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22), 2022, :461-469
[15]   A tale of two explanations: Enhancing human trust by explaining robot behavior [J].
Edmonds, Mark ;
Gao, Feng ;
Liu, Hangxin ;
Xie, Xu ;
Qi, Siyuan ;
Rothrock, Brandon ;
Zhu, Yixin ;
Wu, Ying Nian ;
Lu, Hongjing ;
Zhu, Song-Chun .
SCIENCE ROBOTICS, 2019, 4 (37)
[16]  
Ehsan U, 2020, HCI International 2020 - Late Breaking Papers. Multimodality and Intelligence. 22nd HCI International Conference, HCII 2020. Proceedings. Lecture Notes in Computer Science (LNCS 12424), P449, DOI 10.1007/978-3-030-60117-1_33
[17]   A Survey of Methods for Explaining Black Box Models [J].
Guidotti, Riccardo ;
Monreale, Anna ;
Ruggieri, Salvatore ;
Turini, Franco ;
Giannotti, Fosca ;
Pedreschi, Dino .
ACM COMPUTING SURVEYS, 2019, 51 (05)
[18]   The Need for Verbal Robot Explanations and How People Would Like a Robot to Explain Itself [J].
Han, Zhao ;
Phillips, Elizabeth ;
Yanco, Holly A. .
ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION, 2021, 10 (04)
[19]  
Hatori J, 2018, IEEE INT CONF ROBOT, P3774
[20]  
He KM, 2017, IEEE I CONF COMP VIS, P2980, DOI [10.1109/ICCV.2017.322, 10.1109/TPAMI.2018.2844175]