Perceived Moral Patiency of Social Robots: Explication and Scale Development

Authors
Jaime Banks
Nicholas David Bowman
Institutions
[1] Syracuse University,School of Information Studies
[2] Texas Tech University,College of Media & Communication
[3] Syracuse University,Newhouse School of Public Communications
Source
International Journal of Social Robotics | 2023, Vol. 15
Keywords
social robots; moral patiency; scale development; moral foundations theory;
DOI
Not available
Abstract
As robots are increasingly integrated into human social spheres, they will be put in situations in which they may be perceived as moral patients—the actual or possible targets of humans’ (im)moral actions through which they may realize some benefit or suffering. However, little is understood about this potential, in part because no operationalization exists for measuring humans’ perceptions of machine moral patiency. This paper explicates the notion of perceived moral patiency (PMP) of robots and reports the results of three studies that develop a scale for measuring robot PMP and explore how its measurements relate to relevant social dynamics. We ultimately present an omnibus six-factor scale, with each factor capturing the extent to which people believe a robot deserves a specific kind of moral consideration as specified by moral foundations theory (care, fairness, loyalty, authority, purity, liberty). The omnibus PMP scale’s factor structure is robust across both in-principle and in-context evaluations, and it measures contextualized (local) PMP as distinct from heuristic (global) PMP.
Pages: 101–113 (12 pages)