A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective

Cited by: 74
Authors
Bach, Tita Alissa [1 ]
Khan, Amna [2 ]
Hallock, Harry [1 ]
Beltrao, Gabriela [2 ]
Sousa, Sonia [2 ]
Affiliations
[1] DNV AS, Group Research & Development, Hovik, Norway
[2] Tallinn University, School of Digital Technologies, Tallinn, Estonia
Keywords
ONLINE TRUST; MANAGEMENT
DOI
10.1080/10447318.2022.2138826
CLC Classification
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
User trust in Artificial Intelligence (AI)-enabled systems has been increasingly recognized and proven to be a key element in fostering adoption. It has been suggested that AI-enabled systems must go beyond technology-centric approaches and embrace a more human-centric approach, a core principle of the human-computer interaction (HCI) field. This review provides an overview of user trust definitions, influencing factors, and measurement methods drawn from 23 empirical studies, gathering insight for future technical and design strategies, research, and initiatives to calibrate the user-AI relationship. The findings confirm that there is more than one way to define trust; rather than comparing definitions, the focus should be on selecting the definition that most appropriately depicts user trust in a specific context. User trust in AI-enabled systems is found to be influenced by three main themes: socio-ethical considerations, technical and design features, and user characteristics. User characteristics dominate the findings, reinforcing the importance of user involvement from development through to monitoring of AI-enabled systems. Different contexts and various characteristics of both users and systems are also found to influence user trust, highlighting the importance of selecting and tailoring a system's features to the targeted user group's characteristics. Importantly, socio-ethical considerations can help ensure that the environment in which user-AI interactions take place is sufficiently conducive to establishing and maintaining a trusted relationship. In measuring user trust, surveys are found to be the most common method, followed by interviews and focus groups. In conclusion, user trust needs to be addressed directly in every context where AI-enabled systems are used or discussed. In addition, calibrating the user-AI relationship requires finding the optimal balance that works not only for the user but also for the system.
Pages: 1251-1266
Number of pages: 16