On the Social-Relational Moral Standing of AI: An Empirical Study Using AI-Generated Art

Times Cited: 8
Authors
Lima, Gabriel [1 ,2 ]
Zhunis, Assem [1 ,2 ]
Manovich, Lev [3 ]
Cha, Meeyoung [1 ,2 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon, South Korea
[2] Inst for Basic Sci Korea, Data Sci Grp, Daejeon, South Korea
[3] CUNY, Grad Ctr, New York, NY 10016 USA
Keywords
artificial intelligence; moral standing; moral status; agency; experience; patiency; art; rights; machines
DOI
10.3389/frobt.2021.719944
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Classification Codes
080202 ; 1405 ;
Abstract
The moral standing of robots and artificial intelligence (AI) systems has become a widely debated topic in normative research. This discussion, however, has primarily focused on systems developed for social functions, e.g., social robots. Given society's increasing interdependence with nonsocial machines, it has become imperative to examine how existing normative claims could be extended to specific disrupted sectors, such as the art industry. Inspired by the proposals to ground machines' moral status on social relations advanced by Gunkel and Coeckelbergh, this research presents online experiments (ΣN = 448) that test whether and how interacting with AI-generated art affects the perceived moral standing of its creator, i.e., the AI generative system. Our results indicate that assessing an AI system's lack of mind could influence how people subsequently evaluate AI-generated art. We also find that the overvaluation of AI-generated images could negatively affect their creator's perceived agency. Our experiments, however, did not suggest that interacting with AI-generated art has any significant effect on the perceived moral standing of the machine. These findings reveal that social-relational approaches to AI rights could be intertwined with property-based theses of moral standing. We shed light on how empirical studies can contribute to the AI and robot rights debate by revealing the public's perception of this issue.
Pages: 13