An Efficient and Generic Method for Interpreting Deep Learning based Knowledge Tracing Models

Cited by: 0
Authors
Wang, Deliang [1 ,3 ]
Lu, Yu [1 ,2 ]
Zhang, Zhi [1 ]
Chen, Penghe [1 ,2 ]
Affiliations
[1] Beijing Normal Univ, Sch Educ Technol, Fac Educ, Beijing, Peoples R China
[2] Beijing Normal Univ, Adv Innovat Ctr Future Educ, Beijing, Peoples R China
[3] Univ Hong Kong, Fac Educ, Hong Kong, Peoples R China
Source
31ST INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, ICCE 2023, VOL I | 2023
Funding
National Natural Science Foundation of China;
Keywords
Knowledge tracing models; deep learning; explainable artificial intelligence;
DOI
None
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning-based knowledge tracing (DLKT) models are regarded as a promising solution for estimating learners' knowledge states and predicting their future performance from historical exercise records. However, their increasing complexity and diversity still make it difficult for users, typically both learners and teachers, to understand the models' estimation results, directly hindering model deployment and application. Previous studies have explored using methods from explainable artificial intelligence (xAI) to interpret DLKT models, but these methods have been limited by weak generalization capability and inefficient interpretation procedures. To address these limitations, we propose a simple but efficient model-agnostic interpreting method, called Gradient*Input, to explain the predictions these models make on two datasets. Comprehensive experiments were conducted on five existing DLKT models with representative neural network architectures. The experimental results showed that the method was effective in explaining the predictions of DLKT models. Further analysis of the interpretation results revealed that all five DLKT models share a similar rule in predicting learners' item responses, and the roles of skill and temporal information were identified and discussed. We also suggest potential avenues for investigating the interpretability of DLKT models.
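The record itself includes no code, but the named method, Gradient*Input, is a standard attribution technique: multiply the model's gradient with respect to its input, elementwise, by the input itself. As a minimal sketch of that idea (a one-layer logistic model stands in for a DLKT network here; the weights and the binary correct/incorrect encoding are invented for illustration, not taken from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_x_input(w, x):
    """Gradient*Input attribution for a toy logistic model y = sigmoid(w @ x).

    For this model the gradient dy/dx has the closed form y * (1 - y) * w,
    so each feature's attribution is simply gradient times input, elementwise.
    """
    y = sigmoid(w @ x)
    grad = y * (1.0 - y) * w   # dy/dx_i = y(1-y) * w_i
    return grad * x            # Gradient*Input attribution per input feature

# Hypothetical interaction history: 1 = correct response, 0 = incorrect.
x = np.array([1.0, 0.0, 1.0, 1.0])
w = np.array([0.8, -0.5, 0.3, 0.1])
attr = gradient_x_input(w, x)  # one attribution score per past interaction
```

For a real DLKT model the gradient would come from automatic differentiation (e.g., `torch.autograd.grad` on the predicted response probability) rather than a closed form; the elementwise product with the input is the same final step.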
Pages: 2-11
Page count: 10
Related References
33 records total
[1]   Knowledge Tracing with Sequential Key-Value Memory Networks [J].
Abdelrahman, Ghodai ;
Wang, Qing .
PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19), 2019, :175-184
[2]   Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) [J].
Adadi, Amina ;
Berrada, Mohammed .
IEEE ACCESS, 2018, 6 :52138-52160
[3]
[Anonymous], User Modeling and User-Adapted Interaction, V27, P89, DOI 10.1007/s11257-016-9185-7
[4]  
Arras L, 2019, BLACKBOXNLP WORKSHOP ON ANALYZING AND INTERPRETING NEURAL NETWORKS FOR NLP AT ACL 2019, P113
[5]  
CORBETT AT, 1994, USER MODEL USER-ADAP, V4, P253, DOI 10.1007/BF01099821
[6]  
Cortez P., 2011, Proceedings 2011 IEEE Symposium on Computational Intelligence and Data Mining (CIDM 2011), P341, DOI 10.1109/CIDM.2011.5949423
[7]   Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err [J].
Dietvorst, Berkeley J. ;
Simmons, Joseph P. ;
Massey, Cade .
JOURNAL OF EXPERIMENTAL PSYCHOLOGY-GENERAL, 2015, 144 (01) :114-126
[8]   Addressing the assessment challenge with an online system that tutors as it assesses [J].
Feng, Mingyu ;
Heffernan, Neil ;
Koedinger, Kenneth .
USER MODELING AND USER-ADAPTED INTERACTION, 2009, 19 (03) :243-266
[9]   Context-Aware Attentive Knowledge Tracing [J].
Ghosh, Aritra ;
Heffernan, Neil ;
Lan, Andrew S. .
KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, :2330-2339
[10]   Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation [J].
Goldstein, Alex ;
Kapelner, Adam ;
Bleich, Justin ;
Pitkin, Emil .
JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2015, 24 (01) :44-65