Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems

Cited by: 78
Authors
Panigutti, Cecilia [1 ,2 ]
Beretta, Andrea [3 ]
Pedreschi, Dino [1 ]
Giannotti, Fosca [2 ,3 ]
Affiliations
[1] Univ Pisa, Pisa, Italy
[2] Scuola Normale Super Pisa, Pisa, Italy
[3] CNR, Pisa, Italy
Source
PROCEEDINGS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI '22) | 2022
Funding
European Research Council; EU Horizon 2020
Keywords
XAI; eXplainable AI; HCI; User Study; Behavioral intention; Trust; Advice-taking; Clinical Decision Support System; ARTIFICIAL-INTELLIGENCE; AUTOMATION BIAS; TECHNOLOGY; ACCEPTANCE; TRUST;
DOI
10.1145/3491102.3502104
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline code
0812
Abstract
The field of eXplainable Artificial Intelligence (XAI) focuses on providing explanations for AI systems' decisions. Applying XAI to AI-based Clinical Decision Support Systems (DSS) should increase trust in the DSS by allowing clinicians to investigate the reasons behind its suggestions. In this paper, we present the results of a user study on the impact of advice from a clinical DSS on healthcare providers' judgment in two conditions: one in which the clinical DSS explains its suggestion and one in which it does not. We examined the weight of advice, the behavioral intention to use the system, and users' perceptions, with quantitative and qualitative measures. Our results indicate that advice has a greater impact when an explanation for the DSS decision is provided. Additionally, drawing on the open-ended questions, we offer insights on how to improve the explanations of diagnosis forecasts for healthcare assistants, nurses, and doctors.
Pages: 9