Moral Judgments in the Age of Artificial Intelligence

Cited: 51
Authors
Sullivan, Yulia W. [1 ]
Fosso Wamba, Samuel [2 ]
Affiliations
[1] Baylor Univ, Hankamer Sch Business, One Bear Pl 98005, Foster Campus, Waco, TX 76798 USA
[2] TBS Educ, 1 Pl Alphonse Jourdain, F-31068 Toulouse, France
Keywords
Artificial intelligence; Moral judgments; Mind perception; Perceived agency; Perceived experience; Perceived intentional harm; MIND PERCEPTION; ROBOTS; CONSCIOUSNESS; MACHINES; AI; DEHUMANIZATION; ORGANIZATIONS; ENGAGEMENT; PEOPLE; HARM
DOI
10.1007/s10551-022-05053-w
Chinese Library Classification
F [Economics]
Discipline classification code
02
Abstract
The current research aims to answer the following question: "Who will be held responsible for harm involving an artificial intelligence (AI) system?" Drawing upon the literature on moral judgments, we assert that when people perceive an AI system's action as causing harm to others, they will assign blame to the different entities involved in the AI's life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Building on the theory of mind perception, we hypothesized that two dimensions of mind perception, perceived agency (attributing intention, reasoning, goal pursuit, and communication to AI) and perceived experience (attributing emotional states, such as the capacity to feel pain and pleasure, personality, and consciousness to AI), mediate the relationship between perceived intentional harm and blame judgments toward AI. We also predicted that people attribute greater mind to AI when harm is perceived to be directed at humans than when it is perceived to be directed at non-humans. We tested our research model in three experiments. In all experiments, perceived intentional harm led to blame judgments toward AI. In two experiments, perceived experience, not perceived agency, mediated the relationship between perceived intentional harm and blame judgments. We also found that companies and developers were held responsible for moral violations involving AI, with developers receiving the most blame among the entities involved. Our third experiment reconciles these findings by showing that perceived intentional harm directed at a non-human entity did not increase attributions of mind to AI. These findings have implications for theory and practice concerning unethical outcomes and behavior associated with AI use.
Pages: 917-943
Page count: 27