Employing Peer Review to Evaluate the Quality of Student Generated Content at Scale: A Trust Propagation Approach

Cited by: 14
Authors
Darvishi, Ali [1 ]
Khosravi, Hassan [1 ]
Sadiq, Shazia [1 ]
Affiliations
[1] Univ Queensland, Brisbane, Qld, Australia
Source
PROCEEDINGS OF THE EIGHTH ACM CONFERENCE ON LEARNING @ SCALE, L@S 2021 | 2021
Keywords
Learnersourcing; Crowdsourcing in Education; Learning Analytics; Peer Review; Consensus Approaches; Trust Propagation;
DOI
10.1145/3430895.3460129
CLC number
TP39 [Applications of computers];
Discipline classification codes
081203 ; 0835 ;
Abstract
Engaging students in the creation of learning resources has demonstrated pedagogical benefits and leads to large repositories of learning resources that can complement student learning in different ways. However, to effectively utilise a learnersourced repository of content, a selection process is needed to separate high-quality from low-quality resources, as some of the resources created by students can be ineffective, inappropriate, or incorrect. A common and scalable approach to evaluating the quality of learnersourced content is a peer review process in which students assess the quality of resources authored by their peers. However, this method poses the problem of "truth inference", since the judgements of students, as experts-in-training, cannot be wholly trusted. This paper presents a graph-based approach that propagates reliability and trust using data from peer and instructor evaluations in order to simultaneously infer, in a live setting, the quality of the learnersourced content and the reliability and trustworthiness of users. We use empirical data from a learnersourcing system called RiPPLE to evaluate our approach. Results demonstrate that the proposed approach can propagate reliability and utilise the limited availability of instructors for spot-checking, improving the accuracy of the model compared to baseline models and the model currently used in the system.
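The abstract's core idea of combining trust-weighted peer ratings with instructor spot-checks can be illustrated with a minimal iterative truth-inference loop. This is a generic sketch of the trust-propagation family of consensus methods, not the paper's actual algorithm; all names here (`infer_quality`, `spot_checks`, the trust-update rule) are illustrative assumptions.

```python
def infer_quality(ratings, spot_checks, iters=20):
    """Jointly estimate item quality and reviewer trust.

    ratings: list of (reviewer, item, score) with scores in [0, 1].
    spot_checks: dict item -> instructor-assigned quality in [0, 1].
    Returns (quality, trust) dicts.
    """
    reviewers = {r for r, _, _ in ratings}
    items = {i for _, i, _ in ratings}
    trust = {r: 1.0 for r in reviewers}      # start with uniform trust
    quality = {i: 0.5 for i in items}        # uninformative prior

    for _ in range(iters):
        # Item quality = trust-weighted mean of peer ratings; items the
        # instructor has spot-checked are pinned to the instructor's score,
        # which anchors the trust estimates to ground truth.
        for i in items:
            if i in spot_checks:
                quality[i] = spot_checks[i]
                continue
            num = den = 0.0
            for r, j, s in ratings:
                if j == i:
                    num += trust[r] * s
                    den += trust[r]
            quality[i] = num / den if den else 0.5
        # Reviewers whose ratings agree with the inferred quality gain
        # trust; systematic disagreement lowers it.
        for r in reviewers:
            errs = [abs(s - quality[i]) for rr, i, s in ratings if rr == r]
            trust[r] = max(1e-6, 1.0 - sum(errs) / len(errs))
    return quality, trust


# Tiny example: reviewers "a" and "b" broadly agree, "c" is contrarian,
# and the instructor has spot-checked item "q1" as high quality.
ratings = [
    ("a", "q1", 0.9), ("a", "q2", 0.1),
    ("b", "q1", 0.8), ("b", "q2", 0.2),
    ("c", "q1", 0.1), ("c", "q2", 0.9),
]
quality, trust = infer_quality(ratings, {"q1": 1.0})
```

After a few iterations the contrarian reviewer's trust decays, so the consensus on the unchecked item "q2" is dominated by the reviewers who agreed with the instructor on "q1".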
Pages: 139-150
Number of pages: 12