An Empirical Investigation of Variance Design Parameters for Planning Cluster-Randomized Trials of Science Achievement

Cited by: 36
Authors
Westine, Carl D. [1 ]
Spybrook, Jessaca [2 ]
Taylor, Joseph A. [3 ]
Affiliations
[1] Western Michigan Univ, Kalamazoo, MI 49008 USA
[2] Western Michigan Univ, Dept Educ Leadership Res & Technol, Kalamazoo, MI 49008 USA
[3] BSCS, Colorado Springs, CO USA
Funding
National Science Foundation (USA);
Keywords
intraclass correlation; science education; design parameters; cluster-randomized trials; hierarchical linear models; INTRA-CLASS CORRELATION; EDUCATION-PROGRAMS; STATISTICAL POWER; RANDOM ASSIGNMENT; PRECISION; INTERVENTIONS; SCHOOLS;
DOI
10.1177/0193841X14531584
Chinese Library Classification
C [Social Sciences, General];
Discipline Classification Code
03; 0303;
Abstract
Background: Prior research has focused primarily on empirically estimating design parameters for cluster-randomized trials (CRTs) of mathematics and reading achievement. Little is known about how design parameters compare across other educational outcomes. Objectives: This article presents empirical estimates of design parameters that can be used to appropriately power CRTs in science education and compares them with corresponding estimates for mathematics and reading. Research Design: Estimates of intraclass correlations (ICCs) are computed for unconditional two-level (students in schools) and three-level (students in schools in districts) hierarchical linear models of science achievement. Relevant student- and school-level pretest and demographic covariates are then considered, and estimates of variance explained are computed. Subjects: Five consecutive years of Texas student-level data for Grades 5, 8, 10, and 11. Measures: Science, mathematics, and reading achievement raw scores as measured by the Texas Assessment of Knowledge and Skills. Results: Findings show that ICCs in science range from .172 to .196 across grades and are generally higher than comparable statistics in mathematics, .163-.172, and reading, .099-.156. When available, a 1-year lagged student-level science pretest explains the most variability in the outcome. The 1-year lagged school-level science pretest is the best alternative in the absence of a 1-year lagged student-level science pretest. Conclusion: Researchers planning CRTs in science education should use design parameters derived from science achievement outcomes.
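As a concrete illustration of the two quantities the abstract relies on, the sketch below first estimates an unconditional two-level ICC from a students-in-schools hierarchical linear model and then plugs that ICC, together with the proportion of between-school variance explained by a pretest, into a standard minimum detectable effect size (MDES) formula for a school-randomized trial. This is a minimal sketch, not the authors' code: the data are simulated, the true ICC of .18 is chosen only to fall inside the reported science range (.172-.196), and the R-squared of .50, the numbers of schools and students, and the statsmodels/scipy tooling are all illustrative assumptions.

```python
# Minimal sketch (simulated data, illustrative parameter values -- not the
# article's data or code). Part 1 fits an unconditional two-level HLM
# (students nested in schools) and computes the ICC; Part 2 feeds that ICC
# into a standard two-level MDES formula for a school-randomized CRT.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)

# --- Part 1: unconditional two-level model and ICC --------------------------
n_schools, n_per_school = 60, 25                      # assumed sample sizes
school = np.repeat(np.arange(n_schools), n_per_school)
school_effect = rng.normal(0.0, np.sqrt(0.18), n_schools)[school]        # tau_00 = .18 (assumed)
student_error = rng.normal(0.0, np.sqrt(0.82), n_schools * n_per_school)  # sigma^2 = .82 (assumed)
df = pd.DataFrame({"score": 50 + school_effect + student_error, "school": school})

fit = smf.mixedlm("score ~ 1", df, groups=df["school"]).fit()
tau00 = float(fit.cov_re.iloc[0, 0])   # estimated between-school variance
sigma2 = float(fit.scale)              # estimated within-school variance
icc = tau00 / (tau00 + sigma2)
print(f"Estimated ICC: {icc:.3f}")     # should land near the simulated .18

# --- Part 2: MDES for a balanced two-level CRT (schools randomized) ---------
def mdes_two_level(J, n, rho, R2_school=0.0, R2_student=0.0,
                   P=0.5, alpha=0.05, power=0.80, k_l2=0):
    """MDES in standard-deviation units: J schools, n students per school,
    rho = ICC, R2_* = variance explained by covariates at each level,
    P = proportion of schools treated, k_l2 = number of school-level covariates."""
    dof = J - k_l2 - 2
    multiplier = stats.t.ppf(1 - alpha / 2, dof) + stats.t.ppf(power, dof)
    variance = (rho * (1 - R2_school) / (P * (1 - P) * J)
                + (1 - rho) * (1 - R2_student) / (P * (1 - P) * J * n))
    return multiplier * np.sqrt(variance)

# No covariates vs. a school-level science pretest assumed (hypothetically)
# to explain 50% of the between-school variance.
print(mdes_two_level(J=40, n=60, rho=icc))
print(mdes_two_level(J=40, n=60, rho=icc, R2_school=0.50, k_l2=1))
```

Under these assumed numbers the pretest adjustment shrinks the MDES appreciably, which is the practical reason for reporting variance-explained estimates alongside ICCs when planning CRTs.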
Pages: 490-519
Page count: 30
Related Papers
50 records in total
  • [1] Science Outcome Design Parameters for Cluster-Randomized Trials Involving Teachers
    Shen, Zuchao
    Curran, Chris
    Fleming, Kandace
    Splett, Joni Williams
    Poekert, Philip
    Bratsch-Hines, Mary
    JOURNAL OF RESEARCH ON EDUCATIONAL EFFECTIVENESS, 2024,
  • [2] Estimation and Standardization of Variance Parameters for Planning Cluster-Randomized Trials: A Short Guide for Researchers
    Bulus, Metin
    Gocer Sahin, Sakine
JOURNAL OF MEASUREMENT AND EVALUATION IN EDUCATION AND PSYCHOLOGY-EPOD, 2019, 10 (02) : 179 - 201
  • [3] Introduction to Three Special Issues on Design Parameter Values for Planning Cluster Randomized Trials in the Social Sciences
    Spybrook, Jessaca
    Kelcey, Benjamin
    EVALUATION REVIEW, 2016, 40 (06) : 491 - 499
  • [4] Design and analysis of cluster randomized trials
    Li, Wei
    Xie, Yanli
    Pham, Dung
    Dong, Nianbo
    Spybrook, Jessaca
    Kelcey, Benjamin
    ASIA PACIFIC EDUCATION REVIEW, 2024, 25 (03) : 685 - 701
  • [5] Introduction to Special Issue on Design Parameters for Cluster Randomized Trials in Education
    Spybrook, Jessaca
    EVALUATION REVIEW, 2013, 37 (06) : 435 - 444
  • [6] Multilevel Design Parameters to Plan Cluster-Randomized Intervention Studies on Student Achievement in Elementary and Secondary School
    Stallasch, Sophie E.
    Luedtke, Oliver
    Artelt, Cordula
    Brunner, Martin
    JOURNAL OF RESEARCH ON EDUCATIONAL EFFECTIVENESS, 2021, 14 (01) : 172 - 206
  • [7] Examining Design and Statistical Power for Planning Cluster Randomized Trials Aimed at Improving Student Science Achievement and Science Teacher Outcomes
    Zhang, Qi
    Spybrook, Jessaca
    Unlu, Fatih
    AERA OPEN, 2020, 6 (03)
  • [8] Design of cluster-randomized trials of quality improvement interventions aimed at medical care providers
    Glynn, Robert J.
    Brookhart, M. Alan
    Stedman, Margaret
    Avorn, Jerry
    Solomon, Daniel H.
    MEDICAL CARE, 2007, 45 (10) : S38 - S43
  • [9] Reliability, effect size, and responsiveness of health status measures in the design of randomized and cluster-randomized trials
    Diehr, P
    Chen, L
Patrick, D
    Feng, ZD
    Yasui, Y
    CONTEMPORARY CLINICAL TRIALS, 2005, 26 (01) : 45 - 58
  • [10] The Relationship Among Design Parameters for Statistical Power Between Continuous and Binomial Outcomes in Cluster Randomized Trials
    Chan, Wendy
    PSYCHOLOGICAL METHODS, 2019, 24 (02) : 179 - 195