The future of online testing and assessment: question quality in MOOCs

Cited by: 0
Authors
Eamon Costello
Jane Holland
Colette Kirwan
Affiliations
[1] Dublin City University
[2] Royal College of Surgeons in Ireland
Source
International Journal of Educational Technology in Higher Education | Volume 15
Keywords
MOOCs; Educational futures; Tests; Quality; Multiple choice questions; Assessment
DOI
Not available
Abstract
If MOOCs are to play a role in the future of higher education, it is imperative that we critically examine how they currently function. In particular, questions persist about the role MOOCs will play in the future of formal accredited learning. As the focus turns from informal and free to formal, accredited and paid, greater scrutiny will be brought to bear on the quality of the courses themselves. Although there have been some empirical studies of MOOC quality, a notable gap exists: such research has not examined Multiple Choice Questions (MCQs), which are a key component of much MOOC assessment and testing. Previous research suggests that flawed MCQ items may compromise the reliability and validity of these assessments, potentially leading to inconsistent outcomes for students. This study was therefore designed to examine MCQ quality in MOOCs. A total of 204 MCQs from a selection of 18 MOOCs were analysed, sampling the domains of computing, social science and health sciences. Over 50% of the MCQs (112) contained at least one item flaw, and 57 contained multiple flaws. A large proportion of MOOC MCQs violated item-writing guidelines, a rate comparable to that reported in previous studies of flaw prevalence in assessments in more traditional educational contexts. The problem of low-quality MCQs can be ameliorated by appropriate faculty training and by pre- and post-test quality checks. These activities are essential if MOOCs are to become a force for enhanced and improved pedagogies in the future of higher education, rather than simply replicating existing poor practices at scale.