Good Practice for the Development of Autonomous Weapons: Ensuring the Art of the Acceptable, Not the Art of the Possible

Cited by: 1
Authors
Gillespie, Tony [1,2]
Affiliations
[1] UCL, London, England
[2] Royal Acad Engn, London, England
Keywords
ARTIFICIAL INTELLIGENCE
DOI
10.1080/03071847.2020.1865112
Chinese Library Classification
D0 [Political Science and Political Theory]
Discipline Classification Code
0302; 030201
Abstract
Advanced surveillance systems can autonomously identify military targets. A consequent automated decision to attack without human assessment and authorisation of the action will almost certainly be in breach of international law. Separating decisions from actions clarifies the role of machine-made decisions and the human ability to assess them and to authorise action. High autonomy levels in a weapon system place new responsibilities on organisations and personnel at all stages of procurement and use. In this article, Tony Gillespie builds on recent UN expert discussions to propose that detailed legal reviews at all procurement stages, including pre-development, are needed to ensure compliance with international law. Similar reviews are also needed for automated systems in the decision-making process.
Pages: 58-67 (10 pages)
Related Papers
23 items in total
  • [1] Albus James, 2004, AAAI WORKSHOP REPORT
  • [2] Albus James, 2004, P AAAI ACH HUM LEV I
  • [3] Boothby W.H, 2012, THE LAW OF TARGETING
  • [4] Command and Control Research Program and NATO, 2002, NATO COD BEST PRACT
  • [5] DeepMind Safety Team, 2018, Building safe artificial intelligence: specification, robustness, and assurance
  • [6] Dinstein Yoram, 2020, OSLO MANUAL SELECT T, P33
  • [7] Fairley Dick, SYSTEM LIFE CYCLE PR
  • [8] German Federal Foreign Office and Alliance for Multilateralism, 2020, CHAIRS SUMM VIRT BER
  • [9] Gill Amandeep Singh, 2019, Artificial Intelligence and International Security: The Long View, ETHICS & INTERNATIONAL AFFAIRS, 33 (02): 169-179
  • [10] Gillespie T, 2019, SYSTEMS ENG ETHICAL, P512