Human-Swarm-Teaming Transparency and Trust Architecture

Cited by: 23
Authors
Hepworth, Adam J. [1 ]
Baxter, Daniel P. [1 ]
Hussein, Aya [1 ]
Yaxley, Kate J. [1 ]
Debie, Essam [1 ]
Abbass, Hussein A. [1 ]
Affiliation
[1] Univ New South Wales, Sch Engn & Informat Technol, Canberra, ACT 2612, Australia
Keywords
Artificial intelligence; explainability; human-swarm teaming (HST); interpretability; predictability; swarm shepherding; transparency; SITUATION AWARENESS; AGENT TRANSPARENCY; AUTOMATION; AUTONOMY; INTELLIGENCE; ALLOCATION; MODELS;
DOI
10.1109/JAS.2020.1003545
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Transparency is a widely used but poorly defined term within the explainable artificial intelligence literature. This is due, in part, to the lack of an agreed definition and to the overlap between the connected (and sometimes synonymously used) concepts of interpretability and explainability. We assert that transparency is the overarching concept, with the tenets of interpretability, explainability, and predictability subordinate to it. We draw on a portfolio of definitions for each of these distinct concepts to propose a human-swarm-teaming transparency and trust architecture (HST3-Architecture). The architecture reinforces transparency as a key contributor to situation awareness, and consequently as an enabler of effective, trustworthy human-swarm teaming (HST).
Pages: 1281-1295
Page count: 15
Related Papers
50 records
  • [31] The Impact of Transparency on Driver Trust and Reliance in Highly Automated Driving: Presenting Appropriate Transparency in Automotive HMI
    Li, Jue
    Liu, Jiawen
    Wang, Xiaoshan
    Liu, Long
    APPLIED SCIENCES-BASEL, 2024, 14 (08):
  • [32] The Effect of Asset Degradation on Trust in Swarms: A Reexamination of System-Wide Trust in Human-Swarm Interaction
    Capiola, August
    Hamdan, Izz Aldin
    Lyons, Joseph B.
    Lewis, Michael
    Alarcon, Gene M.
    Sycara, Katia
    HUMAN FACTORS, 2024, 66 (05) : 1475 - 1489
  • [33] Transparency and trust in artificial intelligence systems
    Schmidt, Philipp
    Biessmann, Felix
    Teubner, Timm
    JOURNAL OF DECISION SYSTEMS, 2020, 29 (04) : 260 - 278
  • [34] Transparency in Autonomous Teammates: Intention to Support as Teaming Information
    Panganiban, April Rose
    Matthews, Gerald
    Long, Michael D.
    JOURNAL OF COGNITIVE ENGINEERING AND DECISION MAKING, 2020, 14 (02) : 174 - 190
  • [35] Modeling and Predicting Trust Dynamics in Human-Robot Teaming: A Bayesian Inference Approach
    Guo, Yaohui
    Yang, X. Jessie
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2021, 13 (08) : 1899 - 1909
  • [36] A risk-based trust framework for assuring the humans in human-machine teaming
    Assaad, Zena
    PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON TRUSTWORTHY AUTONOMOUS SYSTEMS, TAS 2024, 2024,
  • [37] Towards a Reference Software Architecture for Human-AI Teaming in Smart Manufacturing
    Haindl, Philipp
    Buchgeher, Georg
    Khan, Maqbool
    Moser, Bernhard
    2022 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: NEW IDEAS AND EMERGING RESULTS (ICSE-NIER 2022), 2022, : 96 - 100
  • [38] Bias, Explainability, Transparency, and Trust for AI-Enabled Military Systems
    Pace, Teresa
    Ranes, Bryan
    ASSURANCE AND SECURITY FOR AI-ENABLED SYSTEMS, 2024, 13054
  • [39] The Influence of Modality and Transparency on Trust in Human-Robot Interaction
    Sanders, Tracy L.
    Wixon, Tarita
    Schafer, K. Elizabeth
    Chen, Jessie Y. C.
    Hancock, P. A.
    2014 IEEE INTERNATIONAL INTER-DISCIPLINARY CONFERENCE ON COGNITIVE METHODS IN SITUATION AWARENESS AND DECISION SUPPORT (COGSIMA), 2014, : 156 - 159
  • [40] Adaptive Aiding of Human-Robot Teaming: Effects of Imperfect Automation on Performance, Trust, and Workload
    de Visser, Ewart
    Parasuraman, Raja
    JOURNAL OF COGNITIVE ENGINEERING AND DECISION MAKING, 2011, 5 (02) : 209 - 231