Human-Swarm-Teaming Transparency and Trust Architecture

Cited by: 23
Authors
Hepworth, Adam J. [1 ]
Baxter, Daniel P. [1 ]
Hussein, Aya [1 ]
Yaxley, Kate J. [1 ]
Debie, Essam [1 ]
Abbass, Hussein A. [1 ]
Affiliations
[1] Univ New South Wales, Sch Engn & Informat Technol, Canberra, ACT 2612, Australia
Keywords
Artificial intelligence; explainability; human-swarm teaming (HST); interpretability; predictability; swarm shepherding; transparency; SITUATION AWARENESS; AGENT TRANSPARENCY; AUTOMATION; AUTONOMY; INTELLIGENCE; ALLOCATION; MODELS;
DOI
10.1109/JAS.2020.1003545
Chinese Library Classification (中图分类号)
TP [Automation technology; computer technology]
Discipline Code (学科分类号)
0812
Abstract
Transparency is a widely used but poorly defined term within the explainable artificial intelligence literature. This is due, in part, to the lack of an agreed definition and to the overlap between the connected, and sometimes synonymously used, concepts of interpretability and explainability. We assert that transparency is the overarching concept, with the tenets of interpretability, explainability, and predictability subordinate to it. We draw on a portfolio of definitions for each of these distinct concepts to propose a human-swarm-teaming transparency and trust architecture (HST3-Architecture). The architecture reinforces transparency as a key contributor to situation awareness, and consequently as an enabler of effective, trustworthy human-swarm teaming (HST).
Pages: 1281-1295 (15 pages)
Related Papers (50 items)
  • [41] How transparency modulates trust in artificial intelligence
    Zerilli, John
    Bhatt, Umang
    Weller, Adrian
    PATTERNS, 2022, 3 (04)
  • [42] Supporting Human-AI Teams: Transparency, explainability, and situation awareness
    Endsley, Mica R.
    COMPUTERS IN HUMAN BEHAVIOR, 2023, 140
  • [43] Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems
    Ososky, Scott
    Sanders, Tracy
    Jentsch, Florian
    Hancock, Peter
    Chen, Jessie Y. C.
    UNMANNED SYSTEMS TECHNOLOGY XVI, 2014, 9084
  • [44] Effects of Automation Transparency on Trust: Evaluating HMI in the Context of Fully Autonomous Driving
    Li, Jue
    He, Yuxi
    Yin, Shuo
    Liu, Long
    PROCEEDINGS OF THE 15TH INTERNATIONAL CONFERENCE ON AUTOMOTIVE USER INTERFACES AND INTERACTIVE VEHICULAR APPLICATIONS, AUTOMOTIVEUI 2023, 2023: 311-321
  • [45] Human-agent teaming and trust calibration: a theoretical framework, configurable testbed, empirical illustration, and implications for the development of adaptive systems
    Bobko, Philip
    Hirshfield, Leanne
    Eloy, Lucca
    Spencer, Cara
    Doherty, Emily
    Driscoll, Jack
    Obolsky, Hannah
    THEORETICAL ISSUES IN ERGONOMICS SCIENCE, 2023, 24 (03): 310-334
  • [46] Human-collective visualization transparency
    Roundtree, Karina A.
    Cody, Jason R.
    Leaf, Jennifer
    Demirel, H. Onan
    Adams, Julie A.
    SWARM INTELLIGENCE, 2021, 15 (03): 237-286
  • [47] Human Trust-Based Feedback Control: Dynamically Varying Automation Transparency to Optimize Human-Machine Interactions
    Akash, Kumar
    McMahon, Griffon
    Reid, Tahira
    Jain, Neera
    IEEE CONTROL SYSTEMS MAGAZINE, 2020, 40 (06): 98-116
  • [48] Transparency for Trust in Government: How Effective is Formal Transparency?
    Cucciniello, Maria
    Nasi, Greta
    INTERNATIONAL JOURNAL OF PUBLIC ADMINISTRATION, 2014, 37 (13): 911-921
  • [49] In AI We Trust? Effects of Agency Locus and Transparency on Uncertainty Reduction in Human-AI Interaction
    Liu, Bingjie
    JOURNAL OF COMPUTER-MEDIATED COMMUNICATION, 2021, 26 (06): 384-402
  • [50] Transparency rights, technology, and trust
    John Elia
    Ethics and Information Technology, 2009, 11: 145-153