Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis

Cited by: 1
Authors
Wang, Deliang [1 ]
Chen, Gaowei [1 ]
Affiliations
[1] Univ Hong Kong, Fac Educ, Hong Kong, Peoples R China
Keywords
Analytical models; Explainable AI; Education; Deep learning; Oral communication; Random forests; Collaboration; Artificial intelligence (AI); classroom discourse; explainable AI; explanations; technology acceptance; trust; AUTOMATION; TRUST;
DOI
10.1109/TE.2024.3421606
CLC Classification Code
G40 [Education]
Discipline Codes
040101; 120403
Abstract
Contributions: To address the interpretability issues in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack classroom discourse analysis from deep learning-based models and evaluate the effects of model explanations on STEM teachers.

Background: Deep learning techniques have been used to automatically analyze classroom dialogue and provide feedback to teachers. However, these complex models operate as black boxes, offering no clear explanation of their analysis, which may lead teachers, particularly those without AI knowledge, to distrust the models and hinder their adoption in teaching practice. It is therefore crucial to address the interpretability issue in AI-powered classroom discourse models.

Research Questions: How can deep learning-based classroom discourse models be explained using explainable AI methods? What is the effect of these explanations on teachers' trust in and technology acceptance of the models? How do teachers perceive the explanations of deep learning-based classroom discourse models?

Method: Two explainable AI methods were employed to interpret deep learning-based models that analyze teacher and student talk moves. A pilot study was conducted with seven STEM teachers interested in learning talk moves and receiving classroom discourse analysis. The study assessed changes in teachers' trust and technology acceptance before and after they received model explanations, and investigated teachers' perceptions of those explanations.

Findings: The AI-powered classroom discourse models were effectively explained using explainable AI methods. The explanations enhanced teachers' trust in and technology acceptance of the classroom discourse models. The seven STEM teachers expressed satisfaction with the explanations and shared their perceptions of them.
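The abstract does not name the two explainable AI methods used. As a hypothetical illustration of the general idea behind perturbation-based explanation methods (the family that includes LIME and SHAP), the sketch below attributes a toy discourse model's score for an utterance to individual words by removing each word and measuring the drop in the score. The `toy_talk_move_score` function and its keyword weights are invented stand-ins, not the paper's models.

```python
# Illustration only: leave-one-word-out attribution, the intuition behind
# perturbation-based XAI methods. The scoring function is a toy stand-in
# for a deep learning-based talk-move classifier.

def toy_talk_move_score(words):
    """Toy model: scores how strongly an utterance resembles a
    'pressing for reasoning' talk move (hypothetical weights)."""
    weights = {"why": 0.6, "explain": 0.5, "because": 0.3, "think": 0.2}
    return sum(weights.get(w, 0.0) for w in words)

def leave_one_out_attribution(words):
    """Explain a prediction by perturbation: remove each word and
    record how much the model's score drops without it."""
    base = toy_talk_move_score(words)
    return {
        w: base - toy_talk_move_score([x for x in words if x != w])
        for w in set(words)
    }

utterance = ["why", "do", "you", "think", "that"]
attribution = leave_one_out_attribution(utterance)
# "why" receives the largest attribution (0.6); words the model
# ignores ("do", "you", "that") receive 0.0.
```

An explanation of this form ("the model flagged this utterance mainly because of the word 'why'") is the kind of output that can be surfaced to teachers alongside the model's analysis, which is the role model explanations play in the study described above.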
Pages: 907-918
Page count: 12