Artificial Moral Agents Within an Ethos of AI4SG

Author
Mabaso B.A. [1 ]
Affiliation
[1] University of Pretoria, Pretoria
Keywords
AI4SG; Artificial moral agency; Exemplarism; Machine ethics
DOI
10.1007/s13347-020-00400-z
Abstract
As artificial intelligence (AI) continues to proliferate into every area of modern life, society must think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused to the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated into society in a manner that maximises the good for as many people as possible whilst minimising the bad. One of the most urgent societal expectations of artificial agents is that they behave in a morally relevant manner, i.e. that they become artificial moral agents (AMAs). In this article, I argue that exemplarism, an ethical theory based on virtue ethics, can be employed to build computationally rational AMAs with weak machine ethics. I further argue that three features of exemplarism, namely its grounding in moral exemplars, its responsiveness to community expectations and its practical simplicity, make it uniquely suited to building AMAs that fit the ethos of AI4SG. © 2020, Springer Nature B.V.
Pages: 7-21 (14 pages)