Reconciling trust and control in the military use of artificial intelligence

Cited by: 0
Author
McFarland, Tim [1 ]
Affiliation
[1] Univ Queensland, TC Beirne Sch Law, Brisbane, Australia
Source
INTERNATIONAL JOURNAL OF LAW AND INFORMATION TECHNOLOGY | 2022, Vol. 30, No. 4
Keywords
artificial intelligence; law of armed conflict; military technology; international law; trust; control; Rome Statute
DOI
10.1093/ijlit/eaad008
Chinese Library Classification
D9 [Law]; DF [Law]
Discipline Classification Code
0301
Abstract
In regulating military applications of artificial intelligence (AI), the relationship between humans and the AI systems they operate is of central importance. AI developers commonly frame the desired human-AI relationship in terms of 'trust', aiming to make AI systems sufficiently 'trustworthy' for the task at hand and foster appropriate levels of human 'trust' in complex, often inscrutable, AI systems. Meanwhile, in legal and ethical discussions, the challenge is generally framed as ensuring that humans retain 'control' over AI such that responsible operators can reliably guide the behaviour of AI systems as required by legal and other norms. Surprisingly, few have asked whether the paradigms of 'trust' and 'control' are guiding development of the human-AI relationship in the same direction. This paper outlines the nature of trust and control as they relate to regulation of the human-AI relationship and surveys some challenges which arise in regulating the military uptake of AI systems.
Pages: 472-483 (12 pages)