The Impact of Training on Human-Autonomy Team Communications and Trust Calibration

Cited by: 15
Authors
Johnson, Craig J. [1 ]
Demir, Mustafa [1 ]
McNeese, Nathan J. [2 ]
Gorman, Jamie C. [3 ]
Wolff, Alexandra T. [1 ]
Cooke, Nancy J. [1 ]
Affiliations
[1] Arizona State Univ, Tempe, AZ 85212 USA
[2] Clemson Univ, Clemson, SC 29631 USA
[3] Georgia Inst Technol, Atlanta, GA 30332 USA
Keywords
human-agent teaming; command and control; collaboration; intelligent systems; artificial intelligence; AUTOMATION; PERFORMANCE; META-ANALYSIS; METHODOLOGY; ENTRAINMENT; ADAPTATION; MANAGEMENT; POWER
DOI
10.1177/00187208211047323
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology]
Discipline classification codes
03 ; 0303 ; 030303 ; 04 ; 0402 ;
Abstract
Objective: This work examines two human-autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions.
Background: Human-autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust.
Method: Thirty teams of three, including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected.
Results: Teams receiving coordination training had higher communication anticipation ratios, took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time. However, they did not perform better than the control condition.
Conclusions: Training based on entrainment of communications, wherein the introduction of timely information exchange through one team member has lasting effects throughout the team, was positively associated with improvements in HAT communications and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust.
Applications: Team training that includes an autonomous agent modeling effective information exchange may positively impact team communication and coordination. Training that emphasizes the limitations of an autonomous agent may help calibrate trust.
Pages: 1554-1570
Page count: 17