Assessing the Quality of Student-Generated Content at Scale: A Comparative Analysis of Peer-Review Models

Cited by: 6
Authors
Darvishi, Ali [1 ]
Khosravi, Hassan [1 ]
Rahimi, Afshin [1 ]
Sadiq, Shazia [1 ]
Gasevic, Dragan [2 ]
Affiliations
[1] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld 4072, Australia
[2] Monash Univ, Fac Informat Technol, Melbourne, Vic 3800, Australia
Source
IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES | 2023, Vol. 16, No. 1
Funding
Australian Research Council
Keywords
Reliability; Analytical models; Probabilistic logic; Crowdsourcing; Task analysis; Data models; Adaptation models; Consensus approaches; crowdsourcing in education; learnersourcing; learning analytics; peer review; FEEDBACK; SIMILARITY; FUTURE;
DOI
10.1109/TLT.2022.3229022
CLC Classification Number
TP39 [Applications of Computers]
Discipline Classification Codes
081203; 0835
Abstract
Engaging students in creating learning resources has demonstrated pedagogical benefits. However, to effectively utilize a repository of student-generated content (SGC), a selection process is needed to separate high- from low-quality resources, as some of the resources created by students can be ineffective, inappropriate, or incorrect. A common and scalable approach is to use a peer-review process where students are asked to assess the quality of resources authored by their peers. Given that the judgments of students, as experts-in-training, cannot be wholly relied upon, a redundancy-based method is widely employed where the same assessment task is given to multiple students. However, this approach introduces a new challenge, referred to as the consensus problem: How can we assign a final quality to a resource given ratings by multiple students? To address this challenge, we investigate the predictive performance of 18 inference models across five well-established categories of consensus approaches for inferring the quality of SGC at scale. The analysis is based on the engagement of 2141 undergraduate students across five courses in creating 12 803 resources and 77 297 peer reviews. Results indicate that the quality of reviews is quite diverse and that students tend to overrate. Consequently, simple statistics such as mean and median fail to identify poor-quality resources. Findings further suggest that incorporating advanced probabilistic and text analysis methods to infer the reviewers' reliability and reviews' quality improves performance; however, there is still an evident need for instructor oversight and training of students to write compelling and reliable reviews.
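To make the consensus problem concrete, the sketch below contrasts a naive mean/majority aggregate with a simple reliability-weighted scheme in the spirit of Dawid-Skene-style iterative reweighting. This is not the paper's implementation (the paper evaluates 18 models across five categories); all data, reliability values, and function names here are hypothetical and chosen only to illustrate why raw averages can fail when reviewers systematically overrate.

```python
# Illustrative sketch of the consensus problem (toy data, not the paper's
# models): naive mean/majority voting vs. a simple iterative
# reliability-weighted aggregate in the spirit of Dawid-Skene.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: resources with a hidden binary quality (1 = good, 0 = poor).
# Unreliable reviewers "overrate": when unfaithful, they always say "good".
n_resources, n_reviewers, reviews_per_resource = 50, 30, 5
true_quality = rng.integers(0, 2, size=n_resources)
reviewer_reliability = rng.uniform(0.5, 0.95, size=n_reviewers)

ratings = {}  # (resource, reviewer) -> 0/1 rating
for r in range(n_resources):
    for j in rng.choice(n_reviewers, size=reviews_per_resource, replace=False):
        faithful = rng.random() < reviewer_reliability[j]
        ratings[(r, j)] = true_quality[r] if faithful else 1  # overrating bias

def mean_aggregate(ratings, n_resources):
    """Naive consensus: threshold the per-resource mean rating at 0.5."""
    sums, counts = np.zeros(n_resources), np.zeros(n_resources)
    for (r, _), v in ratings.items():
        sums[r] += v
        counts[r] += 1
    return (sums / counts >= 0.5).astype(int)

def weighted_aggregate(ratings, n_resources, n_reviewers, iters=10):
    """Alternate between a weighted consensus and per-reviewer weights
    set to each reviewer's (smoothed) agreement rate with that consensus."""
    weights = np.ones(n_reviewers)
    for _ in range(iters):
        # Consensus step: reliability-weighted vote per resource.
        score, norm = np.zeros(n_resources), np.zeros(n_resources)
        for (r, j), v in ratings.items():
            score[r] += weights[j] * v
            norm[r] += weights[j]
        labels = (score / norm >= 0.5).astype(int)
        # Reweighting step: reviewers who agree with consensus gain weight.
        agree, seen = np.zeros(n_reviewers), np.zeros(n_reviewers)
        for (r, j), v in ratings.items():
            agree[j] += v == labels[r]
            seen[j] += 1
        weights = (agree + 1.0) / (seen + 2.0)  # Laplace-smoothed agreement
    return labels

naive = mean_aggregate(ratings, n_resources)
weighted = weighted_aggregate(ratings, n_resources, n_reviewers)
print("naive accuracy:   ", (naive == true_quality).mean())
print("weighted accuracy:", (weighted == true_quality).mean())
```

The intent is purely didactic: a consensus rule that models reviewer reliability can separate systematic overrating from genuine quality where a raw mean cannot, which is the gap the paper's probabilistic and text-analysis models address.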
Pages: 106-120
Page count: 15