Evaluating technology enhanced learning by using single-case experimental design: A systematic review

Cited by: 1
Authors
Dayo, Nadira [1,2,3,4]
Metwaly, Sameh Said [1,2]
Van Den Noortgate, Wim [1,2]
Affiliations
[1] Katholieke Univ Leuven, Fac Psychol & Educ Sci, Leuven, Belgium
[2] Katholieke Univ Leuven, ITEC Imec Res Grp, Leuven, Belgium
[3] Katholieke Univ Leuven, Fac Psychol & Educ Sci, KU Leuven campus Kulak Kortrijk, E. Sabbelaan 51, B-8500 Kortrijk, Belgium
[4] Katholieke Univ Leuven, ITEC Imec Res Grp, KU Leuven campus Kulak Kortrijk, E. Sabbelaan 51, B-8500 Kortrijk, Belgium
Keywords
systematic review; single-case experimental design; technology-enhanced learning; AUTISM SPECTRUM DISORDERS; SUBJECT RESEARCH; CHILDREN; STUDENTS; INTERVENTIONS; TRENDS; TUTOR;
DOI
10.1111/bjet.13468
Chinese Library Classification
G40 [Education];
Discipline classification codes
040101; 120403;
Abstract
Single-case experimental designs (SCEDs) may offer a reliable and internally valid way to evaluate technology-enhanced learning (TEL). A systematic review was conducted to provide an overview of what, why and how SCEDs are used to evaluate TEL. Accordingly, 136 studies from nine databases fulfilling the inclusion criteria were included. The results showed that most of the studies were conducted in the field of special education, focusing on evaluating the effectiveness of computer-assisted instruction, video prompts and mobile devices to improve language and communication, socio-emotional skills and mental health. The research objective of most studies was to evaluate the effects of the intervention; often no specific justification for using SCED was provided. Additionally, multiple baseline and phase designs were the most common SCED types, with most measurements in the intervention phase. Frequent data collection methods were observation, tests, questionnaires and task analysis, whereas visual and descriptive analysis were common methods for data analysis. Nearly half of the studies did not acknowledge any limitations, while a few mentioned generalization and small sample size as limitations. The review provides valuable insights into utilizing SCEDs to advance TEL evaluation methodology and concludes with a reflection on further opportunities that SCEDs can offer for evaluating TEL.

Practitioner notes

What is already known about this topic
  • SCEDs use multiple measurements to study a single participant over multiple conditions, in the absence and presence of an intervention.
  • SCEDs can be rigorous designs for evaluating behaviour change caused by any intervention, including for testing technology-based interventions.

What this paper adds
  • Reveals patterns, trends and gaps in the use of SCEDs for TEL.
  • Identifies the study disciplines, EdTech tools and outcome variables studied using SCEDs.
  • Provides a comprehensive understanding of how SCEDs are used to evaluate TEL by shedding light on methodological techniques.
  • Enriches insights about justifications and limitations of using SCEDs for TEL.

Implications for practice and/or policy
  • Informs about the use of a rigorous method, SCED, for the evaluation of technology-driven interventions across various disciplines.
  • Contributes thereby to the quality of an evidence base that provides policymakers and other stakeholders a consolidated resource to design, implement and decide about TEL.
Pages: 2457-2477
Page count: 21
Related papers
50 records in total
  • [21] APPLYING QUALITY INDICATORS TO SINGLE-CASE RESEARCH DESIGNS USED IN SPECIAL EDUCATION: A SYSTEMATIC REVIEW
    Moeller, Jeremy D.
    Dattilo, John
    Rusch, Frank
    PSYCHOLOGY IN THE SCHOOLS, 2015, 52 (02) : 139 - 153
  • [22] A Systematic Review of Instructional Comparisons in Single-Case Research
    Ledford, Jennifer R.
    Chazin, Kate T.
    Gagnon, Kari L.
    Lord, Anne K.
    Turner, Virginia R.
    Zimmerman, Kathleen N.
    REMEDIAL AND SPECIAL EDUCATION, 2021, 42 (03) : 155 - 168
  • [23] QuantifyMe: An Automated Single-Case Experimental Design Platform
    Sano, Akane
    Taylor, Sara
    Ferguson, Craig
    Mohan, Akshay
    Picard, Rosalind W.
    WIRELESS MOBILE COMMUNICATION AND HEALTHCARE, 2018, 247 : 199 - 206
  • [24] Single-case design standards: An update and proposed upgrades
    Kratochwill, Thomas R.
    Horner, Robert H.
    Levin, Joel R.
    Machalicek, Wendy
    Ferron, John
    Johnson, Austin
    Collins, Tai
    JOURNAL OF SCHOOL PSYCHOLOGY, 2021, 89 : 91 - 105
  • [25] Meta-Analysis of Single-Case Experimental Design using Multilevel Modeling
    Baek, Eunkyeng
    Luo, Wen
    Lam, Kwok Hap
    BEHAVIOR MODIFICATION, 2023, 47 (06) : 1546 - 1573
  • [26] A brief systematic review of the single-case research on Prevent-Teach-Reinforce
    Gregori, Emily
    Lory, Catharine
    Smith, Sandy
    Drew, Christine
    INTERNATIONAL JOURNAL OF DEVELOPMENTAL DISABILITIES, 2024,
  • [27] Comparison of the What Works Clearinghouse Standards for Single-Case Research: Applications for Systematic Reviews
    Lory, Catharine
    Gregori, Emily
    BEHAVIORAL DISORDERS, 2024: 34 - 45
  • [28] Social validity in single-case research: A systematic literature review of prevalence and application
    Snodgrass, Melinda R.
    Chung, Moon Y.
    Meadan, Hedda
    Halle, James W.
    RESEARCH IN DEVELOPMENTAL DISABILITIES, 2018, 74 : 160 - 173
  • [29] Consistency in Single-Case ABAB Phase Designs: A Systematic Review
    Tanious, Rene
    De, Tamal Kumar
    Michiels, Bart
    van den Noortgate, Wim
    Onghena, Patrick
    BEHAVIOR MODIFICATION, 2023, 47 (06) : 1377 - 1406
  • [30] Characteristics of Moderators in Meta-Analyses of Single-Case Experimental Design Studies
    Moeyaert, Mariola
    Yang, Panpan
    Xu, Xinyun
    Kim, Esther
    BEHAVIOR MODIFICATION, 2023, 47 (06) : 1510 - 1545