Assessing the Quality of Student-Generated Content at Scale: A Comparative Analysis of Peer-Review Models

Cited by: 6
Authors
Darvishi, Ali [1 ]
Khosravi, Hassan [1 ]
Rahimi, Afshin [1 ]
Sadiq, Shazia [1 ]
Gasevic, Dragan [2 ]
Affiliations
[1] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld 4072, Australia
[2] Monash Univ, Fac Informat Technol, Melbourne, Vic 3800, Australia
Source
IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES | 2023, Vol. 16, No. 1
Funding
Australian Research Council
Keywords
Reliability; Analytical models; Probabilistic logic; Crowdsourcing; Task analysis; Data models; Adaptation models; Consensus approaches; crowdsourcing in education; learnersourcing; learning analytics; peer review; FEEDBACK; SIMILARITY; FUTURE
DOI
10.1109/TLT.2022.3229022
CLC Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
Engaging students in creating learning resources has demonstrated pedagogical benefits. However, to effectively utilize a repository of student-generated content (SGC), a selection process is needed to separate high- from low-quality resources as some of the resources created by students can be ineffective, inappropriate, or incorrect. A common and scalable approach is to use a peer-review process where students are asked to assess the quality of resources authored by their peers. Given that judgments of students, as experts-in-training, cannot wholly be relied upon, a redundancy-based method is widely employed where the same assessment task is given to multiple students. However, this approach introduces a new challenge, referred to as the consensus problem: How can we assign a final quality to a resource given ratings by multiple students? To address this challenge, we investigate the predictive performance of 18 inference models across five well-established categories of consensus approaches for inferring the quality of SGC at scale. The analysis is based on the engagement of 2141 undergraduate students across five courses in creating 12 803 resources and 77 297 peer reviews. Results indicate that the quality of reviews is quite diverse, and students tend to overrate. Consequently, simple statistics such as mean and median fail to identify poor-quality resources. Findings further suggest that incorporating advanced probabilistic and text analysis methods to infer the reviewers' reliability and reviews' quality improves performance; however, there is still an evident need for instructor oversight and training of students to write compelling and reliable reviews.
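
The abstract contrasts simple statistics (mean, median) with probabilistic models that infer reviewer reliability. As a rough illustration of that contrast, the Python sketch below compares a plain per-resource mean against an iterative reliability-weighted consensus in the Dawid-Skene spirit. The weighting scheme, toy ratings, and all names are illustrative assumptions, not the paper's 18 models or its data.

    import numpy as np

    def mean_consensus(ratings, mask):
        # Baseline: per-resource mean over observed ratings only.
        return (ratings * mask).sum(axis=0) / mask.sum(axis=0)

    def weighted_consensus(ratings, mask, n_iter=20):
        # Iteratively re-weight reviewers by their agreement with the
        # current consensus (a simplified, Dawid-Skene-flavoured scheme
        # for continuous ratings; illustrative only, not the paper's method).
        consensus = mean_consensus(ratings, mask)
        for _ in range(n_iter):
            # Reviewer error: mean squared deviation from the consensus
            # over the resources that reviewer actually rated.
            err = (((ratings - consensus) ** 2) * mask).sum(axis=1) / mask.sum(axis=1)
            weights = 1.0 / (err + 1e-6)      # reliable reviewers weigh more
            w = weights[:, None] * mask
            consensus = (w * ratings).sum(axis=0) / w.sum(axis=0)
        return consensus, weights

    # Hypothetical toy data: 4 reviewers x 3 resources, 1-5 ratings.
    # Reviewer 3 rates everything 5, mimicking the overrating the
    # abstract reports; a mask entry of 1 means the rating was observed.
    ratings = np.array([[4., 2., 5.],
                        [4., 1., 5.],
                        [4., 2., 4.],
                        [5., 5., 5.]])
    mask = np.ones_like(ratings)

    print("mean:    ", mean_consensus(ratings, mask))
    print("weighted:", weighted_consensus(ratings, mask)[0])

Re-estimating reliability from agreement with the evolving consensus is the core idea behind the probabilistic family the abstract credits with better performance; the paper's stronger models additionally exploit review text, which this sketch omits.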
Pages: 106-120
Page count: 15