Meaningful human control: actionable properties for AI system development

Cited by: 0
Authors
Luciano Cavalcante Siebert
Maria Luce Lupetti
Evgeni Aizenberg
Niek Beckers
Arkady Zgonnikov
Herman Veluwenkamp
David Abbink
Elisa Giaccardi
Geert-Jan Houben
Catholijn M. Jonker
Jeroen van den Hoven
Deborah Forster
Reginald L. Lagendijk
Affiliations
[1] Delft University of Technology, AiTech Interdisciplinary Research Program on Meaningful Human Control
[2] Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science
[3] Delft University of Technology, Faculty of Industrial Design Engineering
[4] Delft University of Technology, Faculty of Mechanical, Maritime and Materials Engineering
[5] Delft University of Technology, Faculty of Technology, Policy and Management
Source
AI and Ethics | 2023, Vol. 3, Issue 1
Keywords
Artificial intelligence; AI ethics; Meaningful human control; Moral responsibility; Socio-technical systems;
DOI
10.1007/s43681-022-00167-3
Abstract
How can humans remain in control of artificial intelligence (AI)-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits, but also undesirable situations where moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address such responsibility gaps and to mitigate them by establishing conditions that enable a proper attribution of responsibility to humans; however, clear requirements for researchers, designers, and engineers do not yet exist, making the development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying, through an iterative process of abductive thinking, four actionable properties for AI-based systems under meaningful human control, which we discuss using two application scenarios: automated vehicles and AI-based hiring. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human’s ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue that these four properties will support practically minded professionals in taking concrete steps toward designing and engineering AI systems that facilitate meaningful human control.
Pages: 241-255
Page count: 14