A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective

Cited by: 77
Authors
Bach, Tita Alissa [1 ]
Khan, Amna [2 ]
Hallock, Harry [1 ]
Beltrao, Gabriela [2 ]
Sousa, Sonia [2 ]
Affiliations
[1] DNV AS, Grp Res & Dev, Hovik, Norway
[2] Tallinn Univ, Sch Digital Technol, Tallinn, Estonia
Keywords
Online trust; Management
DOI
10.1080/10447318.2022.2138826
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Discipline classification code
0812
Abstract
User trust in Artificial Intelligence (AI) enabled systems has been increasingly recognized and proven as a key element in fostering adoption. It has been suggested that AI-enabled systems must move beyond technology-centric approaches and embrace a more human-centric approach, a core principle of the human-computer interaction (HCI) field. This review provides an overview of user trust definitions, influencing factors, and measurement methods from 23 empirical studies, gathering insight for future technical and design strategies, research, and initiatives to calibrate the user-AI relationship. The findings confirm that there is more than one way to define trust. Rather than comparing definitions, the focus should be on selecting the trust definition most appropriate for depicting user trust in a specific context. User trust in AI-enabled systems is found to be influenced by three main themes: socio-ethical considerations, technical and design features, and user characteristics. User characteristics dominate the findings, reinforcing the importance of user involvement from development through to monitoring of AI-enabled systems. Different contexts and various characteristics of both the users and the systems are also found to influence user trust, highlighting the importance of selecting and tailoring system features to the targeted user group's characteristics. Importantly, socio-ethical considerations can help ensure that the environment in which user-AI interactions take place is sufficiently conducive to establishing and maintaining a trusted relationship. In measuring user trust, surveys are found to be the most common method, followed by interviews and focus groups. In conclusion, user trust needs to be addressed directly in every context where AI-enabled systems are used or discussed. In addition, calibrating the user-AI relationship requires finding the optimal balance that works not only for the user but also for the system.
Pages: 1251-1266 (16 pages)