Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis

Cited: 1
Authors
Wang, Deliang [1 ]
Chen, Gaowei [1 ]
Affiliations
[1] Univ Hong Kong, Fac Educ, Hong Kong, Peoples R China
Keywords
Analytical models; explainable AI; education; deep learning; oral communication; random forests; collaboration; artificial intelligence (AI); classroom discourse; explanations; technology acceptance; trust; automation
DOI
10.1109/TE.2024.3421606
CLC Number
G40 [Education]
Discipline Classification Code
040101; 120403
Abstract
Contributions: To address the interpretability problem in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack the analyses produced by deep learning-based models and evaluate the effects of model explanations on STEM teachers.

Background: Deep learning techniques have been used to automatically analyze classroom dialogue and provide feedback to teachers. However, these complex models operate as black boxes, offering no clear account of how their analyses are produced, which may lead teachers, particularly those without AI knowledge, to distrust the models and hinder their use in teaching practice. Addressing the interpretability of AI-powered classroom discourse models is therefore crucial.

Research Questions: How can deep learning-based classroom discourse models be explained using explainable AI methods? What effect do these explanations have on teachers' trust in, and technology acceptance of, the models? How do teachers perceive the explanations of deep learning-based classroom discourse models?

Method: Two explainable AI methods were employed to interpret deep learning-based models that analyze teacher and student talk moves. A pilot study was conducted with seven STEM teachers interested in learning talk moves and receiving classroom discourse analysis. Changes in the teachers' trust and technology acceptance were assessed before and after they received model explanations, and their perceptions of the explanations were investigated.

Findings: The explainable AI methods effectively explained the AI-powered classroom discourse models. The explanations enhanced teachers' trust in and technology acceptance of the models. The seven STEM teachers expressed satisfaction with the explanations and shared their perceptions of them.
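The abstract does not name the two explainable AI methods used. For readers unfamiliar with the general technique, the sketch below shows one widely used post-hoc explainer, LIME, applied to a toy scikit-learn text classifier standing in for a talk-move model. The mini-corpus, the two talk-move labels, and the pipeline are illustrative assumptions only, not the authors' actual model or data.

```python
# Illustrative sketch only: LIME on a stand-in talk-move classifier.
# Requires: pip install lime scikit-learn
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical mini-corpus: utterances labeled with two talk-move classes.
utterances = [
    "Can you say more about why you think that?",
    "Who can add on to what she said?",
    "I think the answer is twelve because six times two is twelve.",
    "My claim is that the plant grew faster in sunlight.",
]
labels = [0, 0, 1, 1]  # 0 = teacher press/elicit, 1 = student claim/explain

# A simple TF-IDF + logistic regression pipeline as a proxy for the
# deep learning-based discourse model described in the paper.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(utterances, labels)

# LIME perturbs the input utterance, queries the model's predict_proba,
# and fits a local surrogate to estimate each token's contribution.
explainer = LimeTextExplainer(class_names=["teacher_move", "student_move"])
explanation = explainer.explain_instance(
    "Can you explain how you got that answer?",
    pipeline.predict_proba,
    num_features=5,
)

# Each (token, weight) pair shows how strongly that word pushed the
# prediction, the kind of evidence a teacher could see alongside the
# model's talk-move label.
print(explanation.as_list())
```

In a study like the one described, explanations of this kind would accompany the model's talk-move classification so teachers can see which words drove the analysis rather than receiving an unexplained label.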
Pages: 907-918
Number of Pages: 12