Explainable Learning Analytics: Assessing the stability of student success prediction models by means of explainable AI

Cited by: 2
Authors
Tiukhova, Elena [1 ]
Vemuri, Pavani [1 ]
Flores, Nidia Lopez [2 ]
Islind, Anna Sigridur [2 ]
Oskarsdottir, Maria [2 ]
Poelmans, Stephan [1 ]
Baesens, Bart [1 ,3 ]
Snoeck, Monique [1 ]
Affiliations
[1] Katholieke Univ Leuven, LIRIS, Naamsestraat 69, B-3000 Leuven, Belgium
[2] Reykjavik Univ, Dept Comp Sci, Menntavegi 1, IS-102 Reykjavik, Iceland
[3] Univ Southampton, Dept Decis Analyt & Risk, Univ Rd, Southampton SO17 1BJ, England
Keywords
Learning analytics; Self-regulated learning; Explainable AI; Model stability
DOI
10.1016/j.dss.2024.114229
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Beyond managing student dropout, higher education stakeholders need decision support to consistently influence the student learning process to keep students motivated, engaged, and successful. At the course level, the combination of predictive analytics and self-regulation theory can help instructors determine the best study advice and allow learners to better self-regulate and determine how they want to learn. The best performing techniques are often black-box models that favor performance over interpretability and are heavily influenced by course contexts. In this study, we argue that explainable AI has the potential not only to uncover the reasons behind model decisions, but also to reveal their stability across contexts, effectively bridging the gap between predictive and explanatory learning analytics (LA). In contributing to decision support systems research, this study (1) leverages traditional techniques, such as concept drift and performance drift, to investigate the stability of student success prediction models over time; (2) uses Shapley Additive exPlanations (SHAP) in a novel way to explore the stability of extracted feature importance rankings generated for these models; (3) generates new insights that emerge from stable features across cohorts, enabling teachers to determine study advice. We believe this study makes a strong contribution to education research at large and expands the field of LA by augmenting the interpretability and explainability of prediction algorithms and ensuring their applicability in changing contexts.
Pages: 14
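The abstract outlines per-cohort student success models, SHAP-based feature importance rankings, and drift checks across cohorts. The following is a minimal Python sketch of that idea, not the authors' pipeline: the cohort DataFrames, the "success" label, the RandomForestClassifier, and the Spearman-based stability measure are illustrative assumptions, and the paper's actual features, models, and drift analysis may differ.

```python
import numpy as np
import pandas as pd
import shap
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score


def shap_importance(model, X: pd.DataFrame) -> pd.Series:
    """Global feature importance as the mean absolute SHAP value per feature."""
    shap_vals = shap.TreeExplainer(model).shap_values(X)
    if isinstance(shap_vals, list):        # older shap versions: one array per class
        shap_vals = shap_vals[1]           # keep the positive ("success") class
    shap_vals = np.asarray(shap_vals)
    if shap_vals.ndim == 3:                # newer shap versions: (samples, features, classes)
        shap_vals = shap_vals[..., 1]
    return pd.Series(np.abs(shap_vals).mean(axis=0), index=X.columns)


def fit_cohort_models(cohorts: dict, label: str = "success"):
    """Fit one classifier per cohort and collect its SHAP importance vector.

    Assumes every cohort DataFrame has the same feature columns plus a binary label.
    """
    models, importances = {}, {}
    for name, df in cohorts.items():
        X, y = df.drop(columns=[label]), df[label]
        model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
        models[name] = model
        importances[name] = shap_importance(model, X)
    return models, pd.DataFrame(importances)


def ranking_stability(importances: pd.DataFrame) -> pd.DataFrame:
    """Pairwise Spearman correlation between cohort-level importance rankings."""
    names = importances.columns
    rho = pd.DataFrame(index=names, columns=names, dtype=float)
    for a in names:
        for b in names:
            rho.loc[a, b] = spearmanr(importances[a], importances[b]).correlation
    return rho


def performance_drift(models: dict, cohorts: dict, label: str = "success") -> pd.DataFrame:
    """AUC of each cohort's model on every cohort; off-diagonal drops hint at drift.

    Diagonal entries are training-set AUC and therefore optimistic; this is only
    meant to illustrate the idea of performance drift across cohorts.
    """
    names = list(cohorts)
    auc = pd.DataFrame(index=names, columns=names, dtype=float)
    for trained_on, model in models.items():
        for evaluated_on in names:
            X = cohorts[evaluated_on].drop(columns=[label])
            y = cohorts[evaluated_on][label]
            auc.loc[trained_on, evaluated_on] = roc_auc_score(y, model.predict_proba(X)[:, 1])
    return auc


# Example usage with hypothetical cohort DataFrames:
# cohorts = {"2020": df_2020, "2021": df_2021, "2022": df_2022}
# models, imp = fit_cohort_models(cohorts)
# print(ranking_stability(imp))          # Spearman rho near 1 suggests stable explanations
# print(performance_drift(models, cohorts))
```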