You, Me, and the AI: The Role of Third-Party Human Teammates for Trust Formation Toward AI Teammates

Cited by: 3
Authors
Erengin, Turku [1 ]
Briker, Roman [1 ]
de Jong, Simon B. [1 ]
Affiliation
[1] Maastricht University, School of Business & Economics, Maastricht, Netherlands
Keywords
artificial intelligence; human-AI teams; social cognitive theory; trust; trustworthiness; SOCIAL COGNITIVE THEORY; PRACTICE RECOMMENDATIONS; INTERPERSONAL-TRUST; ABUSIVE SUPERVISION; JOB-ATTITUDES; AUTOMATION; PROPENSITY; TRUSTWORTHINESS; METAANALYSIS; INFORMATION;
DOI
10.1002/job.2857
Chinese Library Classification (CLC): F [Economics]
Discipline classification code: 02
Abstract
As artificial intelligence (AI) becomes increasingly integrated into teams, understanding the factors that drive trust formation between human and AI teammates becomes crucial. Yet, the emergent literature has overlooked the impact of third parties on human-AI teaming. Drawing from social cognitive theory and human-AI teams research, we suggest that how much a human teammate perceives an AI teammate as trustworthy, and engages in trust behaviors toward the AI, determines a focal employee's trust perceptions and behavior toward this AI teammate. Additionally, we propose that these effects hinge on the employee's perceptions of trustworthiness and trust in the human teammate. We test these predictions across two studies: (1) an online experiment with individuals who have work experience, examining trustworthiness perceptions of a disembodied AI, and (2) an incentivized observational study investigating trust behaviors toward an embodied AI. Both studies reveal that a human teammate's perceived trustworthiness of, and trust in, the AI teammate strongly predict the employee's trustworthiness perceptions of, and behavioral trust in, the AI teammate. Furthermore, this relationship vanishes when employees perceive their human teammates as less trustworthy. These results advance our understanding of third-party effects in human-AI trust formation, providing organizations with insights for managing social influences in human-AI teams.
Pages: 26