People May Punish, Not Blame, Robots

Cited: 0
Authors
Lee, Minha [1 ]
Ruijten, Peter [2 ]
Frank, Lily [3 ]
de Kort, Yvonne [2 ]
IJsselsteijn, Wijnand [2 ]
Affiliations
[1] Eindhoven Univ Technol, Future Everyday, Ind Design, Eindhoven, Netherlands
[2] Eindhoven Univ Technol, Human Technol Interact, Eindhoven, Netherlands
[3] Eindhoven Univ Technol, Philosophy & Eth, Eindhoven, Netherlands
Source
CHI '21: PROCEEDINGS OF THE 2021 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS | 2021
Keywords
Blame; punishment; morality; responsibility gap; retribution gap; retributive justice; robots; human-robot interaction; PERCEPTION; BEHAVIOR; AGENCY
DOI
10.1145/3411764.3445284
CLC Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
As robots may take a greater part in our moral decision-making processes, whether people hold them accountable for moral harm becomes critical to explore. Blame and punishment signify moral accountability and often involve emotions. We quantitatively investigated people's willingness to blame or punish an emotional vs. a non-emotional robot that admits to its wrongdoing. Studies 1 and 2 (online video interaction) showed that people may punish a robot because of its lack of perceived emotional capacity rather than its perceived agency. Study 3 (in the lab) demonstrated that people were neither willing to blame nor punish the robot. Punishing non-emotional robots seems more likely than blaming them, yet punishment towards robots is more likely to arise online than offline. We reflect on whether and why victimized humans (and those who care for them) may seek retributive justice against robot scapegoats when there are no humans to hold accountable for moral harm.
Pages: 11