Enhancing Instructors' Capability to Assess Open-Response Using Natural Language Processing and Learning Analytics

Cited by: 5
Authors
Mello, Rafael Ferreira [1 ,2 ]
Neto, Rodrigues [1 ]
Fiorentino, Giuseppe [1 ]
Alves, Gabriel [1 ]
Aredes, Verenna [1 ]
Galdino Ferreira Silva, Joao Victor [1 ]
Falcao, Taciana Pontual [1 ]
Gasevic, Dragan [2 ]
Affiliations
[1] Univ Fed Rural Pernambuco, Rua Dom Manuel de Medeiros S-N, BR-52171900 Recife, PE, Brazil
[2] Monash Univ, 20 Exhibit Walk, Clayton, Vic 3800, Australia
Source
EDUCATING FOR A NEW FUTURE: MAKING SENSE OF TECHNOLOGY-ENHANCED LEARNING ADOPTION, EC-TEL 2022 | 2022 / Vol. 13450
Keywords
Open-response evaluations; Learning analytics; Natural language processing; Recommendation system; FORMATIVE ASSESSMENT;
D O I
10.1007/978-3-031-16290-9_8
Chinese Library Classification
TP39 [Applications of computers];
Discipline codes
081203; 0835
Abstract
Assessments are crucial for measuring student progress and providing constructive feedback. However, instructors face heavy workloads, which often leads them to apply more superficial assessments that do not include the questions and activities needed to evaluate students adequately. For instance, open-ended questions and textual productions are well known to stimulate students' critical thinking and knowledge construction, but this type of question demands considerable effort and time to evaluate. Previous work has focused on automatically scoring open-ended responses based on the similarity between students' answers and a reference solution provided by the instructor. This approach has benefits but also several drawbacks, such as failing to provide quality feedback to students and potentially introducing negative bias into the assessment of activities. To address these challenges, this paper presents a new approach that combines learning analytics and natural language processing methods to support instructors in assessing open-ended questions. The main novelty of this paper is the replacement of similarity analysis with a tag recommendation algorithm that automatically assigns known correct statements and errors to responses, along with an explanation for each tag.
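The core idea described in the abstract — recommending assessment tags (known correct statements and known errors, each carrying an explanation) from previously assessed responses, rather than scoring against a single reference answer — could be sketched minimally as below. This is an illustrative nearest-neighbour sketch only; the data, tag names, and `recommend_tags` function are assumptions for illustration, not the paper's actual algorithm or dataset.

```python
# Illustrative sketch: recommend instructor tags for a new open-ended
# response from the tags of the most similar previously assessed responses.
# All data and names here are hypothetical, not taken from the paper.
from collections import Counter
import math


def bow(text):
    """Bag-of-words vector of a text as a Counter of lowercase tokens."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Previously assessed responses, each labelled with the tags the instructor
# attached; in the paper's approach each tag also carries an explanation.
tagged_responses = [
    ("photosynthesis converts light energy into chemical energy",
     ["correct:energy-conversion"]),
    ("plants breathe in oxygen to make food",
     ["error:gas-confusion"]),
    ("chlorophyll absorbs light in the chloroplast",
     ["correct:chlorophyll-role"]),
]


def recommend_tags(new_response, k=2, threshold=0.2):
    """Collect tags from the k most similar past responses above a threshold."""
    v = bow(new_response)
    scored = sorted(((cosine(v, bow(text)), tags)
                     for text, tags in tagged_responses),
                    key=lambda s: s[0], reverse=True)
    recommended = []
    for score, tags in scored[:k]:
        if score >= threshold:
            recommended.extend(t for t in tags if t not in recommended)
    return recommended


print(recommend_tags("plants take in oxygen and turn it into food"))
```

In this toy run, the new response is closest to the previously tagged misconception, so the known-error tag (with its stored explanation) would be suggested to the instructor for confirmation rather than an automatic score being assigned.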
Pages: 102-115
Page count: 14