Assessing Post-hoc Explainability of the BKT Algorithm

Cited by: 9
Authors
Zhou, Tongyu [1 ]
Sheng, Haoyu [1 ]
Howley, Iris [1 ]
Affiliations
[1] Williams Coll, Williamstown, MA 01267 USA
Source
PROCEEDINGS OF THE 3RD AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY (AIES 2020) | 2020
Keywords
explainable AI; post-hoc explanations; interpretability of algorithms; communicating algorithmic systems; evaluation of xAI systems
DOI
10.1145/3375627.3375856
CLC classification number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As machine intelligence is increasingly incorporated into educational technologies, it becomes imperative for instructors and students to understand the potential flaws of the algorithms on which their systems rely. This paper describes the design and implementation of an interactive post-hoc explanation of the Bayesian Knowledge Tracing algorithm which is implemented in learning analytics systems used across the United States. After a user-centered design process to smooth out interaction design difficulties, we ran a controlled experiment to evaluate whether the interactive or static version of the explainable led to increased learning. Our results reveal that learning about an algorithm through an explainable depends on users' educational background. For other contexts, designers of post-hoc explainables must consider their users' educational background to best determine how to empower more informed decision-making with AI-enhanced systems.
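For context, the Bayesian Knowledge Tracing algorithm that the paper's explainable illustrates follows a standard two-step Bayesian update. The sketch below is not taken from the paper; it is a minimal illustration of the canonical BKT update, and the parameter values (slip, guess, transit probabilities) are arbitrary assumptions for demonstration only.

```python
def bkt_update(p_learn, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One step of canonical Bayesian Knowledge Tracing.

    p_learn   -- prior probability the student knows the skill
    correct   -- whether the observed answer was correct
    p_slip    -- probability of answering wrong despite knowing the skill
    p_guess   -- probability of answering right without knowing the skill
    p_transit -- probability of learning the skill after this opportunity
    (all parameter defaults are illustrative, not from the paper)
    """
    if correct:
        # Posterior that the skill was known, given a correct answer
        cond = (p_learn * (1 - p_slip)) / (
            p_learn * (1 - p_slip) + (1 - p_learn) * p_guess
        )
    else:
        # Posterior that the skill was known, given an incorrect answer
        cond = (p_learn * p_slip) / (
            p_learn * p_slip + (1 - p_learn) * (1 - p_guess)
        )
    # Account for the chance of learning during this practice opportunity
    return cond + (1 - cond) * p_transit
```

With the illustrative defaults above, a correct answer raises the mastery estimate and an incorrect one lowers it, which is the behavior the paper's interactive explainable lets users explore.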
Pages: 407-413
Number of pages: 7