Understanding the Limits of Explainable Ethical AI

Cited by: 1
Authors
Peterson, Clayton [1 ]
Broersen, Jan [2 ]
Affiliations
[1] Univ Quebec Trois Rivieres, 3351 Bd Forges, Trois Rivieres, PQ G8Z 4M3, Canada
[2] Univ Utrecht, Janskerkhof 13, NL-3512 BL Utrecht, Netherlands
Keywords
Ethics of artificial intelligence; ethical pluralism; autonomous ethical agents; automated behavior; automated reasoning; risk; choice
DOI
10.1142/S0218213024600017
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Artificially intelligent systems are nowadays presented as systems that should, among other things, be explainable and ethical. In parallel, both in popular culture and in the scientific literature, there is a tendency to anthropomorphize Artificial Intelligence (AI) and reify intelligent systems as persons. From the perspective of machine ethics and ethical AI, this has resulted in the belief that truly autonomous ethical agents (i.e., machines and algorithms) can be defined, and that machines could, by themselves, behave ethically and perform actions that are justified (explainable) from a normative (ethical) standpoint. Under this assumption, and given that utilities and risks are generally seen as quantifiable, many scholars have regarded consequentialism (or utilitarianism) and rational choice theory as likely candidates for implementation in automated ethical decision procedures, for instance to assess and manage risks and to maximize expected utility. While some see this implementation as unproblematic, such attempts face important limitations that need to be made explicit so that we can properly understand what artificial autonomous ethical agents are, and what they are not. From the perspective of explainable AI, value-laden technical choices made during the implementation of automated ethical decision procedures cannot be explained as decisions made by the system. Building on a recent example from the machine ethics literature, we use computer simulations to study whether autonomous ethical agents can be considered explainable AI systems. Using these simulations, we argue that technical issues with ethical ramifications leave room for reasonable disagreement even when algorithms are based on ethical and rational foundations such as consequentialism and rational choice theory. In doing so, we aim to illustrate the limitations of automated behavior and ethical AI and, incidentally, to raise awareness of the limits of so-called autonomous ethical agents.
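
The abstract's central technical point, that a rule like "maximize expected utility" underdetermines behavior until further implementation choices are fixed, can be illustrated with a minimal sketch. The Python snippet below is our own illustration, not the authors' simulation code; the lotteries and utility functions are hypothetical. Two agents implementing the same consequentialist decision rule recommend different actions solely because of one value-laden technical choice: the risk attitude encoded in the utility function.

```python
import math

# Two candidate actions, each modeled as a lottery: a list of
# (probability, outcome_value) pairs. The numbers are hypothetical,
# chosen only to make the disagreement visible.
actions = {
    "A": [(1.0, 40.0)],               # a certain, moderate outcome
    "B": [(0.5, 100.0), (0.5, 0.0)],  # a risky gamble with higher expected value
}

def expected_utility(lottery, u):
    """Expected utility of a lottery under utility function u."""
    return sum(p * u(x) for p, x in lottery)

def best_action(actions, u):
    """Consequentialist choice rule: pick the action maximizing expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name], u))

risk_neutral = lambda x: x             # u(x) = x: utility is the raw value
risk_averse = lambda x: math.sqrt(x)   # u(x) = sqrt(x): a concave, risk-averse utility

print(best_action(actions, risk_neutral))  # -> B (EU = 50.0 vs. 40.0)
print(best_action(actions, risk_averse))   # -> A (EU ~ 6.32 vs. 5.0)
```

Both agents are faithful implementations of expected-utility maximization, yet neither can "explain" the choice of u(x): that choice was made by the designer before the system ever ran, which is precisely the kind of value-laden technical decision the paper argues cannot be explained as a decision made by the system.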
Pages: 24