The Fundamental Attribution Error in Human-Robot Interaction: An Experimental Investigation on Attributing Responsibility to a Social Robot for Its Pre-Programmed Behavior

Cited by: 10
Authors
Horstmann, Aike C. [1 ]
Kraemer, Nicole C. [1 ]
Affiliations
[1] Univ Duisburg Essen, Social Psychol Media & Commun, Duisburg, Germany
Keywords
Human-robot interaction; Agency; Autonomy; Attribution theory; Fundamental attribution error; Humanoid robot; Experimental study; AUTOMATIC ACTIVATION; INTRINSIC MOTIVATION; PSYCHOLOGICAL SAFETY; MORAL AGENCY; FEEDBACK; SELF; RECIPROCITY; PERFORMANCE; VALIDATION; COMPUTERS
DOI
10.1007/s12369-021-00856-9
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
As social robots advance rapidly and increasingly enter people's everyday environments, interactions with robots are also evolving. To help design and execute these interactions successfully, this study draws on attribution theory to explore the circumstances under which people hold a robot responsible for its actions. In an experimental online study with a 2 x 2 x 2 between-subjects design (N = 394), participants read a vignette describing the social robot Pepper either as an assistant or as a competitor, and the feedback it gave during a subsequent quiz, which was either positive or negative, as either generated autonomously by the robot or pre-programmed by programmers. Results showed that feedback believed to be autonomous led to more agency, responsibility, and competence being attributed to the robot than feedback believed to be pre-programmed. Moreover, the more agency was ascribed to the robot, the better its sociability and the interaction with it were evaluated. However, only the valence of the feedback directly affected the evaluation of the robot's sociability and of the interaction with it, which points to the occurrence of a fundamental attribution error.
Pages: 1137-1153
Page count: 17