Utilizing a Pretrained Language Model (BERT) to Classify Preservice Physics Teachers' Written Reflections

Cited by: 26
Authors
Wulff, Peter [1 ]
Mientus, Lukas [2 ]
Nowak, Anna [2 ]
Borowski, Andreas [2 ]
Affiliations
[1] Heidelberg Univ Educ, Phys Educ Res, Neuenheimer Feld 560-562, D-69120 Heidelberg, Germany
[2] Univ Potsdam, Inst Phys & Astron, Karl Liebknecht Str 24-25, D-14476 Potsdam, Germany
Keywords
Reflective writing; NLP; Deep learning; Science education; Automated analysis; AutoTutor; Knowledge; Courses; System
DOI
10.1007/s40593-022-00290-6
Chinese Library Classification (CLC)
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
Computer-based analysis of preservice teachers' written reflections could enable educational scholars to design personalized and scalable interventions to support reflective writing. Algorithms and technologies from artificial intelligence research have proven useful for many tasks in reflective writing analytics, such as classifying text segments. So far, however, mostly shallow learning algorithms have been employed. This study explores to what extent deep learning approaches can improve classification performance for segments of written reflections. To this end, a pretrained language model (BERT) was used to classify segments of preservice physics teachers' written reflections according to the elements of a reflection-supporting model. Since BERT has been found to advance performance on many tasks, we hypothesized that it would enhance classification performance for written reflections as well. We also compared BERT's performance with that of other deep learning architectures and examined the conditions for best performance. We found that BERT outperformed both the other deep learning architectures and the previously reported performances of shallow learning algorithms for classifying segments of reflective writing. BERT begins to outperform the other models when trained on about 20 to 30% of the training data. Furthermore, attribution analyses of the inputs yielded insights into the features that drive BERT's classification decisions. Our study indicates that pretrained language models such as BERT can boost performance on language-related tasks in educational contexts, such as classification.
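The abstract's finding that BERT begins to outperform other models at roughly 20 to 30% of the training data rests on a learning-curve comparison: training classifiers on increasing fractions of the training set and scoring each on a held-out test set. The following is a minimal sketch of that evaluation procedure using a shallow baseline (logistic regression, as in the paper's shallow-learning comparisons) on synthetic data; the dataset, feature counts, and fractions are illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch: learning-curve evaluation of a shallow classifier,
# mirroring the kind of training-set-size comparison described in the
# abstract. All data here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for featurized reflection segments with 3 labels
# (e.g., elements of a reflection-supporting model).
X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train on growing fractions of the training data, score on the same test set.
scores = {}
for frac in (0.1, 0.3, 1.0):
    n = max(int(frac * len(X_train)), 10)
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    scores[frac] = accuracy_score(y_test, clf.predict(X_test))

for frac, acc in scores.items():
    print(f"{int(frac * 100):3d}% of training data -> accuracy {acc:.2f}")
```

In the study itself, BERT's curve would be computed the same way (fine-tuning on each fraction) and compared against curves like this one; the crossover point of the two curves is what the "20 to 30%" finding refers to.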
Pages: 439-466 (28 pages)