Converging Measures and an Emergent Model: A Meta-Analysis of Human-Machine Trust Questionnaires

Cited by: 1
Authors
Razin, Yosef S. [1]
Feigh, Karen M. [1]
Affiliation
[1] Georgia Inst Technol, Sch Aerosp Engn, Atlanta, GA 30332 USA
Keywords
Human-Robot Trust; Shared Mental Models; Anthropomorphism Increases Trust; Paternalistic Leadership; Individual Differences; Negative Attitudes; Scale Development; Automation; Performance; Validation; Acceptance; Confidence
DOI
10.1145/3677614
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Classification Codes
080202; 1405
Abstract
Trust is crucial for technological acceptance, continued usage, and teamwork. However, human-robot trust, and human-machine trust more generally, suffer from terminological disagreement and construct proliferation. By comparing, mapping, and analyzing well-constructed trust survey instruments, this work uncovers a consensus structure of trust in human-machine interaction. To do so, we identify the most frequently cited and best-validated human-machine and human-robot trust questionnaires as well as the best-established factors that form the dimensions and antecedents of such trust. To reduce both confusion and construct proliferation, we provide a detailed mapping of terminology between questionnaires. Furthermore, we perform a meta-analysis of the regression models which emerged from the experiments that employed multi-factorial survey instruments. Based on this meta-analysis, we provide the most complete, experimentally validated model of human-machine and human-robot trust to date. This convergent model establishes an integrated framework for future research. It delineates the current boundaries of trust measurement and identifies where further investigation and validation are necessary. We close by discussing how to choose an appropriate trust survey instrument and how to design for trust. By identifying the internal workings of trust, this work develops a more complete and widely applicable basis for measuring it.
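As background for the abstract's mention of meta-analyzing regression models drawn from multi-factorial trust surveys, the Python sketch below illustrates one generic way such results are often pooled: inverse-variance weighting of standardized coefficients with a DerSimonian-Laird random-effects adjustment. The coefficients, standard errors, and the choice of pooling method here are illustrative assumptions for this sketch only, not the paper's actual procedure or data.

# Illustrative sketch: random-effects pooling of standardized regression
# coefficients across studies. All numbers below are hypothetical placeholders.
import math

# (beta, standard error) pairs from hypothetical studies relating one trust
# antecedent (e.g., perceived performance) to reported trust.
studies = [(0.42, 0.08), (0.35, 0.10), (0.51, 0.07), (0.28, 0.12)]

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = [1.0 / se**2 for _, se in studies]
beta_fe = sum(wi * b for wi, (b, _) in zip(w, studies)) / sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2).
q = sum(wi * (b - beta_fe) ** 2 for wi, (b, _) in zip(w, studies))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights, pooled coefficient, and its standard error.
w_re = [1.0 / (se**2 + tau2) for _, se in studies]
beta_re = sum(wi * b for wi, (b, _) in zip(w_re, studies)) / sum(w_re)
se_re = math.sqrt(1.0 / sum(w_re))

print(f"pooled beta = {beta_re:.3f} +/- {1.96 * se_re:.3f} (95% CI)")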
Pages: 41