A multi-modal explainability approach for human-aware robots in multi-party conversation

Cited: 0
Authors
Beckova, Iveta [1 ]
Pocos, Stefan [1 ]
Belgiovine, Giulia [2 ]
Matarese, Marco [2 ]
Eldardeer, Omar [2 ]
Sciutti, Alessandra [2 ]
Mazzola, Carlo [2 ]
Affiliations
[1] Comenius Univ, Fac Math Phys & Informat, Mlynska Dolina F1, Bratislava 84248, Slovakia
[2] Italian Inst Technol, COgNiT Architecture Collaborat Technol Unit, Via Enrico Melen 83, I-16152 Genoa, Italy
Keywords
Human activity recognition; Explainable AI; Transparency; Attention; Human-robot interaction; Addressee estimation; DIMENSIONS;
DOI
10.1016/j.cviu.2025.104304
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Addressee estimation (understanding to whom somebody is talking) is a fundamental task for human activity recognition in multi-party conversation scenarios. In the field of human-robot interaction specifically, it becomes even more crucial for enabling social robots to participate in such interactive contexts. However, it is usually implemented as a binary classification task that only lets the robot estimate whether or not it was addressed, which limits its interactive skills. For a social robot to gain the trust of humans, it is also important to manifest a certain level of transparency and explainability. Explainable artificial intelligence thus plays a significant role in current machine learning applications and models, which should provide explanations for their decisions alongside excellent performance. In our work, we (a) present an addressee estimation model with improved performance over the previous state of the art; (b) further modify this model to include inherently explainable attention-based segments; (c) implement the explainable addressee estimation as part of a modular cognitive architecture for multi-party conversation in an iCub robot; (d) validate the real-time performance of the explainable model in multi-party human-robot interaction; (e) propose several ways to incorporate explainability and transparency in the aforementioned architecture; and (f) perform an online user study to analyze the effect of various explanations on how human participants perceive the robot.
Pages: 22
References
90 entries
[1]   A Multi-party Conversational Social Robot Using LLMs [J].
Addlesee, Angus ;
Cherakara, Neeraj ;
Nelson, Nivan ;
Garcia, Daniel Hernandez ;
Gunson, Nancie ;
Sieinska, Weronika ;
Romeo, Marta ;
Dondrup, Christian ;
Lemon, Oliver .
COMPANION OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024 COMPANION, 2024, :1273-1275
[2]  
Agarwal C, 2024, Arxiv, DOI arXiv:2402.04614
[3]  
Atkinson RC, 1968, PSYCHOL LEARN MOTIV, V2, P89
[4]   Multimodal Attentive Learning for Real-time Explainable Emotion Recognition in Conversations [J].
Arumugam, Balaji ;
Das Bhattacharjee, Sreyasee ;
Yuan, Junsong .
2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, :1210-1214
[5]  
Auer P., 2018, Gaze, Addressee Selection and Turn- Taking in Three-Party Interaction, P197
[6]   Real-Time Multimodal Turn-taking Prediction to Enhance Cooperative Dialogue during Human-Agent Interaction [J].
Bae, Young-Ho ;
Bennett, Casey C. .
2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, :2037-2044
[7]   Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI [J].
Barredo Arrieta, Alejandro ;
Diaz-Rodriguez, Natalia ;
Del Ser, Javier ;
Bennetot, Adrien ;
Tabik, Siham ;
Barbado, Alberto ;
Garcia, Salvador ;
Gil-Lopez, Sergio ;
Molina, Daniel ;
Benjamins, Richard ;
Chatila, Raja ;
Herrera, Francisco .
INFORMATION FUSION, 2020, 58 :82-115
[8]   HRI Framework for Continual Learning in Face Recognition [J].
Belgiovine, Giulia ;
Gonzalez-Billandon, Jonas ;
Sciutti, Alessandra ;
Sandini, Giulio ;
Rea, Francesco .
2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, :8226-8233
[9]   The (Fe)male Robot: How Robot Body Shape Impacts First Impressions and Trust Towards Robots [J].
Bernotat, Jasmin ;
Eyssel, Friederike ;
Sachse, Janik .
INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2021, 13 (03) :477-489
[10]  
Bewley A, 2016, IEEE IMAGE PROC, P3464, DOI 10.1109/ICIP.2016.7533003