Moral Uncanny Valley: A Robot's Appearance Moderates How Its Decisions Are Judged

Cited by: 44
Authors
Laakasuo, Michael [1 ]
Palomaki, Jussi [1 ]
Köbis, Nils [2]
Affiliations
[1] University of Helsinki, Helsinki, Finland
[2] Max Planck Institute for Human Development, Center for Humans and Machines, Berlin, Germany
Funding
Academy of Finland
Keywords
Uncanny valley; Moral psychology; Decision-making; AI; Responses; People; Inhibition; Evolution; Dilemmas
DOI
10.1007/s12369-020-00738-6
CLC Classification
TP24 [Robotics]
Subject Classification
080202; 1405
Abstract
Artificial intelligence and robotics are rapidly advancing, and humans are increasingly often affected by autonomous machines making choices with moral repercussions. At the same time, classical research in robotics shows that people are averse to robots that appear eerily human, a phenomenon commonly referred to as the uncanny valley effect. Yet little is known about how machines' appearances influence how humans evaluate their moral choices. Here we integrate the uncanny valley effect into moral psychology. In two experiments, we test whether humans evaluate identical moral choices made by robots differently depending on the robots' appearance. Participants evaluated either deontological ("rule-based") or utilitarian ("consequence-based") moral decisions made by different robots. The results provide a first indication that people evaluate moral choices by robots that resemble humans as less moral than the same choices made by humans or non-human robots: a moral uncanny valley effect. We discuss the implications of our findings for moral psychology, social robotics, and AI safety policy.
Pages: 1679-1688
Page count: 10