Making AI Accessible for STEM Teachers: Using Explainable AI for Unpacking Classroom Discourse Analysis

Cited by: 1
Authors
Wang, Deliang [1 ]
Chen, Gaowei [1 ]
Affiliations
[1] University of Hong Kong, Faculty of Education, Hong Kong, People's Republic of China
Keywords
Analytical models; explainable AI; education; deep learning; oral communication; random forests; collaboration; artificial intelligence (AI); classroom discourse; explanations; technology acceptance; trust; automation
DOI
10.1109/TE.2024.3421606
CLC number
G40 [Education]
Discipline codes
040101; 120403
Abstract
Contributions: To address the interpretability issues in artificial intelligence (AI)-powered classroom discourse models, we employ explainable AI methods to unpack the analyses produced by deep learning-based models and evaluate the effects of the model explanations on STEM teachers. Background: Deep learning techniques have been used to analyze classroom dialogue automatically and provide feedback to teachers. However, these complex models operate as black boxes and offer no clear explanation of their analyses, which may lead teachers, particularly those without AI knowledge, to distrust the models and hinder their adoption in teaching practice. It is therefore crucial to address the interpretability issue in AI-powered classroom discourse models. Research Questions: How can deep learning-based classroom discourse models be explained using explainable AI methods? What effect do these explanations have on teachers' trust in, and technology acceptance of, the models? How do teachers perceive the explanations? Method: Two explainable AI methods were employed to interpret deep learning-based models that analyzed teacher and student talk moves. A pilot study was conducted with seven STEM teachers who were interested in learning talk moves and receiving classroom discourse analysis. The study assessed changes in the teachers' trust and technology acceptance before and after they received the model explanations, and investigated their perceptions of the explanations. Findings: The AI-powered classroom discourse models were effectively explained using the explainable AI methods. The explanations enhanced teachers' trust in and technology acceptance of the models, and the seven STEM teachers expressed satisfaction with the explanations.
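The record does not name the two explainable AI methods the authors applied, so the sketch below is illustrative only: it shows one common way to produce local, word-level explanations for a talk-move text classifier using LIME. The TF-IDF + logistic-regression pipeline, the talk-move labels, and the example utterances are all hypothetical stand-ins for the paper's deep learning models, not the authors' actual implementation.

```python
# Minimal sketch: local, word-level explanations for a (stand-in)
# talk-move classifier using LIME. Everything below is invented for
# illustration; the paper's actual models and methods may differ.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical talk-move categories and toy training utterances.
classes = ["press_for_reasoning", "revoicing"]
utterances = [
    "Why do you think the current increases?",
    "Can you explain how you got that answer?",
    "So you are saying the force stays the same?",
    "Let me repeat what Maria said about the circuit.",
]
labels = [0, 0, 1, 1]

# Stand-in classifier exposing predict_proba, as LIME requires.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, labels)

# LIME perturbs the utterance by dropping words and fits a local linear
# surrogate model, yielding a per-word contribution to the prediction.
explainer = LimeTextExplainer(class_names=classes)
explanation = explainer.explain_instance(
    "Can you say more about why you chose that equation?",
    model.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # (word, weight) pairs a teacher could inspect
```

Because LIME only needs a `predict_proba`-style function, the same explanation step could sit in front of a deep (e.g., BERT-based) discourse model without changing the workflow.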
Pages: 907–918
Number of pages: 12