Using Agent Features to Influence User Trust, Decision Making and Task Outcome during Human-Agent Collaboration

Cited: 5
Authors
Herse, Sarita [1 ]
Vitale, Jonathan [2 ,3 ]
Williams, Mary-Anne [1 ]
Affiliations
[1] Univ New South Wales, Sch Management & Governance, UNSW Business Sch, Sydney, Australia
[2] Univ Technol Sydney, Sch Comp Sci, Sydney, Australia
[3] Univ New England, Sch Comp Sci, Armidale, Australia
Keywords
ROBOT; AUTOMATION; STRATEGIES; ALLOCATION; POWER
DOI
10.1080/10447318.2022.2150691
CLC Number
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
Optimal performance of collaborative tasks requires consideration of the interactions between intelligent agents and their human counterparts. The functionality and success of these agents depend on their ability to maintain appropriate user trust, with too much or too little trust leading to over-reliance and under-utilisation, respectively. This highlights the need for a trust calibration methodology capable of varying user trust and decision making in-task. An online experiment was run to investigate whether stimulus difficulty and the implementation of agent features by a collaborative recommender system interact to influence user perception, trust and decision making. Agent features are changes to the Human-Agent interface and interaction style; here they comprise presentation of a disclaimer message, a request for more information from the user, and a no-additional-feature control. Signal detection theory is used to interpret decision making, applied both to performance on the task itself and to decisions made with the collaborative agent. The results demonstrate that decision change occurs more often for hard stimuli, with participants across all features choosing to change their initial decision to follow the agent's recommendation. Furthermore, agent features can be used to mediate user decision making and trust in-task, though the direction and extent of this influence depend on the implemented feature and the difficulty of the task. The results emphasise the complexity of user trust in Human-Agent collaboration, highlighting the importance of considering task context in the wider perspective of trust calibration.
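For readers unfamiliar with the measures referenced in the abstract: signal detection theory separates decision making into sensitivity (d', the ability to discriminate signal from noise) and response bias (criterion c), both computed from hit and false-alarm rates. The following minimal sketch in Python is illustrative only, not the authors' code; the function name, the rate-clamping correction, and the example trial counts are assumptions for demonstration.

    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        # Sensitivity (d') and criterion (c) from raw trial counts.
        n_signal = hits + misses
        n_noise = false_alarms + correct_rejections
        # Clamp rates away from 0 and 1 so the z-scores stay finite
        # (one common correction; the paper may use a different one).
        hit_rate = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
        fa_rate = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa             # higher = better discrimination
        criterion = -0.5 * (z_hit + z_fa)  # positive = conservative responding
        return d_prime, criterion

    # Hypothetical data: 40 signal trials (32 hits), 40 noise trials (8 false alarms)
    print(sdt_measures(32, 8, 8, 32))  # d' ~ 1.68, c = 0.0

In a Human-Agent task such as the one described, the same measures can be computed twice: once over the participant's unaided responses and once over their final responses after seeing the agent's recommendation, so that shifts in d' and c reflect the agent's influence.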
Pages: 1740-1761
Page count: 22