Implicit theories of the human mind predict competitive and cooperative responses to AI robots

Cited by: 30
Authors
Dang, Jianning [1 ]
Liu, Li [1 ]
Affiliations
[1] Beijing Normal Univ, Fac Psychol, Natl Demonstrat Ctr Expt Psychol Educ, Beijing, Peoples R China
Keywords
Human-robot interactions; Competition; Cooperation; Implicit theories; Mind; ACHIEVEMENT GOALS; MOTIVATION; STUDENTS; MODEL; ANTHROPOMORPHISM; ATTRIBUTIONS; UNIQUENESS; COGNITION; BELIEFS; SUPPORT
DOI
10.1016/j.chb.2022.107300
Chinese Library Classification (CLC)
B84 [Psychology];
Discipline Classification Code
04; 0402
Abstract
Robots designed with artificial intelligence (AI) can be regarded as either humans' competitors or cooperators. Previous research has mainly focused on how robot-related factors, especially robots' mental capacities or mind, influence human-robot interactions. However, inspired by the perspective that robots can inform us about the nature of the human mind, we argue that people's implicit theories of the human mind, namely, beliefs about the malleability of the human mind, predict their competitive and cooperative responses to AI robots. Accordingly, among British (Study 1) and Chinese participants (Study 2), we measured implicit theories of the human mind and examined their roles in predicting both perceived competitiveness and cooperativeness of AI robots and intentions to compete and cooperate with them. We also explored the mediating roles of performance and mastery goals. The results revealed that a malleable theory of the human mind was negatively associated with performance-avoidance goals, which, in turn, positively predicted competitive responses to robots. A malleable theory of the human mind was also positively associated with mastery goals, which, in turn, positively predicted cooperative responses to robots. Further, Chinese participants reported less competitive and more cooperative responses to AI robots than British participants. The present research extends our understanding of human-robot interactions and has implications for facilitating the adoption of innovative technology.
Pages: 10