Trustworthy AI: a plea for modest anthropocentrism

Cited: 0
Author
Nyrup R. [1]
Institution
[1] Leverhulme Centre for the Future of Intelligence, University of Cambridge, 16 Mill Lane, Cambridge
Source
Asian Journal of Philosophy, Vol. 2, Issue 2
Keywords
Anthropocentrism; Anthropomorphism; Functional norms; Modest anthropocentrism; Social practices; Trustworthy AI
DOI
10.1007/s44204-023-00096-w
Abstract
Simion and Kelp defend a non-anthropocentric account of trustworthy AI, based on the idea that the obligations of AI systems should be sourced in purely functional norms. In this commentary, I highlight some pressing counterexamples to their account, involving AI systems that reliably fulfil their functions but are untrustworthy because those functions are antagonistic to the interests of the trustor. Instead, I outline an alternative account, based on the idea that AI systems should not be considered primarily as tools but as technological participants in social practices. Specifically, I propose to source the obligations of an AI system in the norms that should govern the role it plays within the social practices it participates in, taking into account any changes to the social practices that its participation may bring about. This proposal is anthropocentric insofar as it ascribes obligations to AI systems that are similar to those of human participants in social practices, but only modestly so, as it does not require trustworthy AI to have contentious anthropomorphic capacities (e.g. for consciousness or moral responsibility). © 2023, The Author(s).