Human control of AI systems: from supervision to teaming

Cited by: 0
Authors
Andreas Tsamados [1 ]
Luciano Floridi [2 ]
Mariarosaria Taddeo [1 ]
Affiliations
[1] University of Oxford, Oxford Internet Institute
[2] Yale University, Digital Ethics Center
[3] Alan Turing Institute
Source
AI and Ethics | 2025 / Volume 5 / Issue 2
Keywords
Artificial intelligence; Foundation models; Human control; Human–machine teaming; Cooperative AI; Supervisory control; Meaningful human control;
DOI
10.1007/s43681-024-00489-4
Abstract
This article reviews two main approaches to human control of AI systems: supervisory human control and human–machine teaming. It explores how each approach defines and guides the operational interplay between human behaviour and system behaviour to ensure that AI systems are effective throughout their deployment. Specifically, the article looks at how the two approaches differ in their conceptual and practical adequacy regarding the control of AI systems based on foundation models, i.e., models trained on vast datasets, exhibiting general capabilities, and producing non-deterministic behaviour. The article focuses on examples from the defence and security domain to highlight practical challenges in terms of human control of automation in general, and AI in particular, and concludes by arguing that approaches to human control are better served by an understanding of control as the product of collaborative agency in a multi-agent system rather than of exclusive human supervision.
Pages: 1535–1548
Page count: 13