A Conceptual Model of Trust, Perceived Risk, and Reliance on AI Decision Aids

Cited by: 28
Authors
Solberg, Elizabeth [1 ]
Kaarstad, Magnhild [1 ]
Eitrheim, Maren H. Ro [1 ]
Bisio, Rossella [2 ]
Reegard, Kine [1 ]
Bloch, Marten [2 ]
Affiliations
[1] Inst Energy Technol, Dept Human Ctr Digitalizat, Os Alle 5, N-1777 Halden, Norway
[2] Inst Energy Technol, Dept Humans & Automat, Halden, Norway
Keywords
trust; perceived risk; reliance; artificial intelligence; AI decision aids; empirical evidence; integrative model; automation; performance; acceptance; organizations; perspectives; information; cooperation; experience
DOI
10.1177/10596011221081238
CLC number
B849 [Applied Psychology]
Discipline classification code
040203
Abstract
There is increasing interest in the use of artificial intelligence (AI) to improve organizational decision-making. However, research indicates that people's trust in and choice to rely on "AI decision aids" can be tenuous. In the present paper, we connect research on trust in AI with Mayer, Davis, and Schoorman's (1995) model of organizational trust to elaborate a conceptual model of trust, perceived risk, and reliance on AI decision aids at work. Drawing from the trust in technology, trust in automation, and decision support systems literatures, we redefine central concepts in Mayer et al.'s (1995) model, expand the model to include new, relevant constructs (like perceived control over an AI decision aid), and refine propositions about the relationships expected in this context. The conceptual model put forward presents a framework that can help researchers studying trust in and reliance on AI decision aids develop their research models, build systematically on each other's research, and contribute to a more cohesive understanding of the phenomenon. Our paper concludes with five next steps to take research on the topic forward.
Pages: 187–222
Page count: 36
References
83 entries in total
[1] Alan, A. (2014). AAMAS '14: Proceedings of the 2014 International Conference on Autonomous Agents & Multiagent Systems, 965.
[2] Anonymous. (2017). Proceedings of the 38th International Conference on Information Systems.
[3] Aoki, N. (2021). The importance of the assurance that "humans are still in the decision loop" for public trust in artificial intelligence: Evidence from an online experiment. Computers in Human Behavior, 114.
[4] Arrabito. (2013). Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 57, 374. DOI: 10.1177/1541931213571081.
[5] Bahner, J. E. (2008). Proceedings of the Human Factors and Ergonomics Society 52nd Annual Meeting, 1330.
[6] Beldad, A. D., & Hegner, S. M. (2018). Expanding the Technology Acceptance Model with the Inclusion of Trust, Social Influence, and Health Valuation to Determine the Predictors of German Users' Willingness to Continue using a Fitness App: A Structural Equation Modeling Approach. International Journal of Human-Computer Interaction, 34(9), 882–893.
[7] Biros, D. P., Daly, M., & Gunsch, G. (2004). The influence of task load and automation trust on deception detection. Group Decision and Negotiation, 13(2), 173–189.
[8] Blau, P. M. (1968). International Encyclopedia of the Social Sciences, 7, 452.
[9] Burr, C., Cristianini, N., & Ladyman, J. (2018). An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds and Machines, 28(4), 735–774.
[10] Chancey, E. T., Bliss, J. P., Yamani, Y., & Handley, H. A. H. (2017). Trust and the Compliance-Reliance Paradigm: The Effects of Risk, Error Bias, and Reliability on Trust and Dependence. Human Factors, 59(3), 333–345.