Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence

Cited by: 8
Authors
Yuan, Chien Wen [1 ]
Bi, Nanyi [1 ]
Lin, Ya-Fang [2 ]
Tseng, Yuen-Hsien [3 ]
Affiliations
[1] Natl Taiwan Univ, Taipei, Taiwan
[2] Penn State Univ, State Coll, PA USA
[3] Natl Taiwan Normal Univ, Taipei, Taiwan
Source
PROCEEDINGS OF THE 2023 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2023 | 2023
Keywords
Artificial Intelligence; Human-Computer Interaction (HCI); Explainable AI (XAI); Human-Centered Computing; Explainability; Transparency; AI bias
DOI
10.1145/3544548.3580945
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Biases in Artificial Intelligence (AI) systems or their results are one important issue that demands AI explainability. Despite the prevalence of AI applications, the general public is not necessarily equipped to understand how black-box algorithms work or how to deal with biases. To inform designs for explainable AI (XAI), we conducted in-depth interviews with major stakeholders, both end-users (n = 24) and engineers (n = 15), to investigate how they made sense of AI applications and the associated biases in high- and low-stakes situations. We discuss users' perceptions of and attributions about AI biases, as well as their desired levels and types of explainability. We found that personal relevance and boundaries, together with the level of stakes, are two major dimensions for developing user trust, especially in biased situations, and for informing XAI designs.
Pages: 15