Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots

Cited by: 101
Authors
Tsiourti, Christiana [1 ]
Weiss, Astrid [2 ]
Wac, Katarzyna [3 ]
Vincze, Markus [1 ]
Affiliations
[1] TU Wien, Vision4Robot Grp, Automat & Control Inst ACIN, Gusshausstr 27, A-1040 Vienna, Austria
[2] TU Wien, Human Comp Interact HCI Grp, Inst Visual Comp & Human Ctr Technol, Argentinierstr 8-E193-5, A-1040 Vienna, Austria
[3] Univ Copenhagen, Qual Life Technol Grp, Human Ctr Comp Sect, Emil Holms Kanal 6, DK-2300 Copenhagen, Denmark
Funding
EU Horizon 2020; Swiss National Science Foundation;
Keywords
Social robots; Human-robot interaction; Robot emotions; Multi-modal interaction; Body language; Believability; FACIAL EXPRESSIONS; AUDIOVISUAL INTEGRATION; CULTURAL SPECIFICITY; PERCEPTION; FACE; UNIVERSALITY; ENGAGEMENT; POSTURES; FEATURES; AGENT;
DOI
10.1007/s12369-019-00524-z
CLC Classification Number
TP24 [Robotics];
Subject Classification Number
080202 ; 1405 ;
Abstract
Humanoid social robots have an increasingly prominent place in today's world. Their acceptance in social and emotional human-robot interaction (HRI) scenarios depends on their ability to convey well-recognized and believable emotional expressions to their human users. In this article, we incorporate recent findings from psychology, neuroscience, human-computer interaction, and HRI to examine how people recognize and respond to emotions displayed by the body and voice of humanoid robots, with a particular emphasis on the effects of incongruence. In a social HRI laboratory experiment, we investigated contextual incongruence (i.e., the conflict situation where a robot's reaction is incongruous with the socio-emotional context of the interaction) and cross-modal incongruence (i.e., the conflict situation where an observer receives incongruous emotional information across the auditory (vocal prosody) and visual (whole-body expressions) modalities). Results showed that both contextual incongruence and cross-modal incongruence confused observers and decreased the likelihood that they accurately recognized the emotional expressions of the robot. This, in turn, gives the impression that the robot is unintelligent or unable to express "empathic" behaviour and leads to profoundly harmful effects on likability and believability. Our findings reinforce the need for proper design of emotional expressions for robots that use several channels to communicate their emotional states in a clear and effective way. We offer recommendations regarding design choices and discuss future research areas in the direction of multimodal HRI.
Pages: 555-573
Page count: 19