Investigating the Role of Multi-modal Social Cues in Human-Robot Collaboration in Industrial Settings

Cited by: 4
Authors
Cao, Hoang-Long [1 ,2 ]
Scholz, Constantin [1 ,3 ]
De Winter, Joris [1 ,2 ]
El Makrini, Ilias [1 ,2 ]
Vanderborght, Bram [1 ,3 ]
Affiliations
[1] Vrije Univ Brussel, BruBot, Brussels, Belgium
[2] Flanders Make, Lommel, Belgium
[3] imec, Leuven, Belgium
Funding
European Union Horizon 2020;
Keywords
Collaborative robots; Multi-modal social cues; Godspeed; Acceptance; COMMUNICATION; GESTURES; GAZE;
DOI
10.1007/s12369-023-01018-9
CLC number
TP24 [Robotics];
Subject classification code
080202; 1405;
Abstract
Expressing social cues through different communication channels plays an important role in mutual understanding, in both human-human and human-robot collaboration. A few studies have investigated the effects of zoomorphic and anthropomorphic social cues expressed by industrial robot arms on robot-to-human communication. In this work, we investigate the role of multi-modal social cues by combining the robot's head-like gestures with light and sound modalities in two studies. The first study found that multi-modal social cues have positive effects on people's perception of the robot, perceived enjoyment, and intention to use. The second study found that combining human-like gestures with light and/or sound modalities could lead to higher understandability of the robot's social cues. These findings support the use of multi-modal social cues for robots in industrial settings. However, possible negative impacts of implementing these social cues, e.g., overtrust and distraction, should be considered.
Pages: 1169-1179
Number of pages: 11