Minimum important difference is minimally important in sample size calculations

Cited: 12
Author
Wong, Hubert [1 ]
Affiliation
[1] Univ British Columbia, Sch Populat & Publ Hlth, 2206 East Mall, Vancouver, BC V6T 1Z3, Canada
Keywords
Clinical trial; Power; Effect size; Assumed benefit; TRIALS; IMPACT
DOI
10.1186/s13063-023-07092-8
Chinese Library Classification (CLC)
R-3 [Medical research methods]; R3 [Basic medicine]
Discipline code
1001
Abstract
Performing a sample size calculation for a randomized controlled trial requires specifying an assumed benefit (that is, the mean improvement in outcomes due to the intervention) and a target power. There is a widespread belief that judgments about the minimum important difference should be used when setting the assumed benefit, and thus the sample size. This belief is misguided: when the purpose of the trial is to test the null hypothesis of no treatment benefit, the only role the minimum important difference should play is in determining whether the sample size should be zero, that is, whether the trial should be conducted at all.

The true power of the trial depends on the true benefit, so the calculated sample size will yield a true power close to the target power used in the calculation only if the assumed benefit is close to the true benefit. Hence, the assumed benefit should be set to a value that is a realistic estimate of the true benefit. If a trial designed using a realistic assumed benefit is unlikely to demonstrate that a meaningful benefit exists, the trial should not be conducted. Any attempt to reconcile discrepancies between the realistic estimate of benefit and the minimum important difference when setting the assumed benefit merely conflates a valid sample size calculation with one based on faulty inputs, and leads to a true power that fails to match the target power.

When calculating sample size, trial designers should focus their efforts on determining realistic estimates of the true benefit, not on what magnitude of benefit is judged important.
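The abstract's central point can be illustrated numerically. The sketch below uses the standard two-sample normal-approximation formula for sample size and power (a textbook approximation, not a method taken from this article; the effect sizes 0.5 SD and 0.3 SD are illustrative assumptions): a trial sized for an assumed benefit of 0.5 SD at 80% power has far lower true power if the true benefit is only 0.3 SD.

```python
import math
from statistics import NormalDist


def n_per_group(delta_assumed, sigma=1.0, alpha=0.05, target_power=0.80):
    """Per-group sample size for a two-sample z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(target_power)
    return math.ceil(2 * ((z_a + z_b) * sigma / delta_assumed) ** 2)


def true_power(n, delta_true, sigma=1.0, alpha=0.05):
    """Power actually achieved with n per group when the true benefit is delta_true."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(delta_true * math.sqrt(n / 2) / sigma - z_a)


# Sizing on an assumed benefit of 0.5 SD gives 63 per group,
# but if the true benefit is 0.3 SD the true power is only ~39%.
n = n_per_group(0.5)
print(n, round(true_power(n, 0.3), 2))  # prints: 63 0.39
```

This makes the abstract's argument concrete: the calculated sample size delivers the target power only when the assumed benefit matches the true benefit, which is why the assumed benefit should be a realistic estimate rather than the minimum important difference.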
Pages: 4