User Trust and Understanding of Explainable AI: Exploring Algorithm Visualisations and User Biases

Cited: 14
Authors
Branley-Bell, Dawn [1 ]
Whitworth, Rebecca [2 ]
Coventry, Lynne [1 ]
Affiliations
[1] Northumbria Univ, Newcastle Upon Tyne NE1 8ST, Tyne & Wear, England
[2] The Catalyst, Red Hat, Newcastle Upon Tyne NE4 5TG, Tyne & Wear, England
Source
HUMAN-COMPUTER INTERACTION. HUMAN VALUES AND QUALITY OF LIFE, HCI 2020, PT III | 2020 / Vol. 12183
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Explainable AI; Artificial intelligence; Machine learning; Health; Trust; Understanding; Healthcare; Medical diagnoses; Cognitive biases;
DOI
10.1007/978-3-030-49065-2_27
CLC Classification Number
TP3 (Computing technology, computer technology);
Subject Classification Code
0812;
Abstract
Artificial intelligence (AI) is increasingly being integrated into different areas of our lives. AI has the potential to increase productivity and relieve the workload of staff in high-pressure jobs such as healthcare. However, most AI healthcare tools have failed. For AI to be effective, it is vital that users can understand how the system processes data. Explainable AI (XAI) moves away from the traditional 'black box' approach, aiming to make the processes behind the system more transparent. This experimental study uses real healthcare data - and combines computer science and psychological approaches - to investigate user trust in, and understanding of, three popular XAI algorithms (decision trees, logistic regression and neural networks). The results question the contribution of understanding to user trust, suggesting that understanding and explainability are not the only factors contributing to trust in AI. Users also show biases in trust and understanding, with a particular bias towards malignant results. This raises important issues around how humans can be encouraged to make more accurate judgements when using XAI systems. These findings have implications for ethics, future XAI design, healthcare and further research.
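The three model families compared in the study can be reproduced in spirit with a minimal sketch. This is illustrative code, not the authors' implementation: it assumes scikit-learn's built-in Wisconsin breast cancer dataset as the healthcare data, and all hyperparameters below are placeholders rather than values taken from the paper.

```python
# Illustrative sketch (not the authors' code): fit the paper's three model
# families - a decision tree, logistic regression, and a neural network -
# to scikit-learn's built-in Wisconsin breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # 0 = malignant, 1 = benign
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardise features: the tree is scale-invariant, but logistic
# regression and the neural network converge much better on scaled data.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Hyperparameters are assumptions for the sketch, not from the paper.
models = {
    "decision_tree": DecisionTreeClassifier(max_depth=3, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,),
                                    max_iter=1000, random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_test, y_test)
    print(f"{name}: test accuracy = {scores[name]:.3f}")
```

A shallow tree and a linear model expose their decision process directly (split thresholds, coefficients), whereas the neural network does not, which is what makes a trust comparison across these three families informative.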
Pages: 382-399
Page count: 18