Neuro-Symbolic Reasoning for Multimodal Referring Expression Comprehension in HMI Systems

Cited by: 0
Authors
Jain, Aman [1,2]
Kondapally, Anirudh Reddy [1,2]
Yamada, Kentaro [2]
Yanaka, Hitomi [1]
Affiliations
[1] Univ Tokyo, Tokyo, Japan
[2] Honda Res & Dev Co Ltd, Tokyo, Japan
Keywords
Human-machine interaction; Referring expression comprehension; Neuro-symbolic reasoning; Multimodal learning; Pointing behavior
DOI
10.1007/s00354-024-00243-8
Chinese Library Classification: TP3 [Computing technology; computer technology]
Discipline code: 0812
Abstract
Conventional Human-Machine Interaction (HMI) interfaces have relied predominantly on GUIs and voice commands. However, natural human communication also involves non-verbal cues, including hand gestures such as pointing. Recent HMI systems have therefore tried to incorporate pointing gestures as an input, making significant progress in recognizing them and integrating them with voice commands. However, existing approaches often treat these input modalities independently, limiting their capacity to handle complex multimodal instructions that require intricate reasoning over language and gestures. Meanwhile, multimodal tasks requiring complex reasoning are being tackled in the vision-and-language domain, but these typically do not include gestures such as pointing. To bridge this gap, we explore one of these challenging multimodal tasks, Referring Expression Comprehension (REC), within multimodal HMI systems that incorporate pointing gestures. We present a virtual setup in which a robot shares an environment with a user and is tasked with identifying objects based on the user's verbal and gestural instructions. To address this challenge, we propose a hybrid neuro-symbolic model that combines the versatility of deep learning with the interpretability of symbolic reasoning. Our contributions include a challenging multimodal REC dataset for HMI systems, an interpretable neuro-symbolic model, and an assessment of its ability to generalize its reasoning to unseen environments, complemented by an in-depth qualitative analysis of the model's inner workings.
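
For intuition only, the following is a minimal, hypothetical Python sketch of the kind of pipeline the abstract describes: neural perception is abstracted as pre-computed object detections, a parsed referring expression supplies symbolic constraints, and a pointing cue ranks the surviving candidates. All names, attributes, and the scoring scheme are illustrative assumptions, not the authors' implementation.

# Hypothetical neuro-symbolic REC sketch (not the paper's implementation).
from dataclasses import dataclass
from typing import List, Optional, Tuple
import math

@dataclass
class Detection:
    label: str                      # category assumed to come from a neural object detector
    color: str                      # attribute assumed to come from the same detector
    position: Tuple[float, float]   # (x, y) location in the shared environment

def pointing_score(position, origin, direction):
    """Cosine similarity between the pointing ray and the vector to the object."""
    vx, vy = position[0] - origin[0], position[1] - origin[1]
    norm = math.hypot(vx, vy) * math.hypot(*direction)
    return 0.0 if norm == 0 else (vx * direction[0] + vy * direction[1]) / norm

def resolve(expression: dict, detections: List[Detection],
            origin, direction) -> Optional[Detection]:
    """Symbolic step: keep candidates satisfying the parsed constraints,
    then rank the survivors by agreement with the pointing gesture."""
    candidates = [d for d in detections
                  if d.label == expression["category"]
                  and (expression.get("color") is None or d.color == expression["color"])]
    if not candidates:
        return None
    return max(candidates, key=lambda d: pointing_score(d.position, origin, direction))

# Usage: "that red cup" plus a pointing gesture toward the right of the scene.
scene = [Detection("cup", "red", (2.0, 1.0)),
         Detection("cup", "red", (-1.5, 0.5)),
         Detection("bottle", "green", (2.2, 1.1))]
parsed = {"category": "cup", "color": "red"}   # assumed output of a language parser
print(resolve(parsed, scene, origin=(0.0, 0.0), direction=(1.0, 0.2)))

Under these assumptions, the symbolic filter first discards the bottle, and the pointing cue then selects the red cup lying along the pointing direction; each intermediate decision remains inspectable, which is the interpretability benefit the abstract attributes to the symbolic component.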
Pages: 579-598
Number of pages: 20