Performing high-powered studies efficiently with sequential analyses

Cited by: 269
Author
Lakens, Daniel [1]
Affiliation
[1] Eindhoven University of Technology, Human Technology Interaction Group, IPO 1-33, PO Box 513, NL-5600 MB Eindhoven, Netherlands
Keywords
sample size; replication; boundaries; design
DOI
10.1002/ejsp.2023
Chinese Library Classification
B84 [Psychology]
Subject Classification Codes
04; 0402
Abstract
Running studies with high statistical power is a practical challenge when designing an experiment, because effect size estimates in psychology are often inaccurate. This challenge can be addressed by performing sequential analyses while data collection is still in progress. At an interim analysis, data collection can be stopped whenever the results are convincing enough to conclude that an effect is present, more data can be collected, or the study can be terminated whenever it is extremely unlikely that the predicted effect will be observed if data collection were continued. Such interim analyses can be performed while controlling the Type 1 error rate. Sequential analyses can greatly improve the efficiency with which data are collected. Additional flexibility is provided by adaptive designs, where sample sizes are increased on the basis of the observed effect size. The need for pre-registration, ways to prevent experimenter bias, and a comparison between Bayesian approaches and null-hypothesis significance testing (NHST) are discussed. Sequential analyses, which are widely used in large-scale medical trials, provide an efficient way to perform high-powered informative experiments. I hope this introduction provides a practical primer that allows researchers to incorporate sequential analyses in their research. Copyright (c) 2014 John Wiley & Sons, Ltd.
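The abstract notes that interim analyses can be performed while controlling the Type 1 error rate. One common way to do this is with alpha-spending functions, which allocate portions of the overall Type 1 error rate to each interim look. The sketch below is illustrative only and not taken from the paper: it assumes a two-sided overall alpha of .05 and three equally spaced looks, and prints how much alpha the Lan-DeMets O'Brien-Fleming-type and Pocock-type spending functions allot to each look. Exact sequential critical values depend on the joint distribution of the interim test statistics and are normally obtained from dedicated software (e.g., the R packages gsDesign or GroupSeq).

```python
# Illustrative sketch (not from the paper): Lan-DeMets alpha-spending
# functions for three equally spaced interim looks, overall alpha = .05.
from math import e, log

from scipy.stats import norm

ALPHA = 0.05                   # overall two-sided Type 1 error rate
LOOKS = [1 / 3, 2 / 3, 1.0]    # information fraction at each analysis


def obrien_fleming_spending(t, alpha=ALPHA):
    """O'Brien-Fleming-type spending: very little alpha is spent early."""
    z = norm.ppf(1 - alpha / 2)
    return 2 - 2 * norm.cdf(z / t ** 0.5)


def pocock_spending(t, alpha=ALPHA):
    """Pocock-type spending: alpha is spent more evenly across looks."""
    return alpha * log(1 + (e - 1) * t)


for name, spend in [("O'Brien-Fleming", obrien_fleming_spending),
                    ("Pocock", pocock_spending)]:
    cumulative = [spend(t) for t in LOOKS]
    per_look = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]
    print(f"{name}-type spending function:")
    for t, cum, inc in zip(LOOKS, cumulative, per_look):
        print(f"  look at t = {t:.2f}: cumulative alpha = {cum:.4f}, spent this look = {inc:.4f}")
```

With the O'Brien-Fleming-type function almost all of the alpha is saved for the final analysis, so stopping early requires very convincing data; the Pocock-type function makes early stopping easier at the cost of a stricter threshold at the final analysis.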
Pages: 701-710 (10 pages)