Estimating publication bias in meta-analyses of peer-reviewed studies: A meta-meta-analysis across disciplines and journal tiers

Times cited: 33
Authors
Mathur, Maya B. [1 ]
VanderWeele, Tyler J. [2 ]
Affiliations
[1] Stanford Univ, Quantitat Sci Unit, Palo Alto, CA 94304 USA
[2] Harvard TH Chan Sch Publ Hlth, Dept Epidemiol, Boston, MA USA
Funding
US National Institutes of Health;
Keywords
meta-analysis; publication bias; reproducibility; scientific method; selective reporting; ROBUST VARIANCE-ESTIMATION; EFFECT SIZE; P VALUES; PREVALENCE; TESTS; POWER;
DOI
10.1002/jrsm.1464
CLC classification
Q [Biological Sciences];
Discipline codes
07; 0710; 09;
Abstract
Selective publication and reporting in individual papers compromise the scientific record, but are meta-analyses as compromised as their constituent studies? We systematically sampled 63 meta-analyses (each comprising at least 40 studies) in PLoS One, top medical journals, top psychology journals, and Metalab, an online, open-data database of developmental psychology meta-analyses. We empirically estimated publication bias in each, including only the peer-reviewed studies. Across all meta-analyses, we estimated that "statistically significant" results in the expected direction were only 1.17 times more likely to be published than "nonsignificant" results or those in the unexpected direction (95% CI: [0.93, 1.47]), a confidence interval substantially overlapping the null. Comparable estimates were 0.83 for meta-analyses in PLoS One, 1.02 for top medical journals, 1.54 for top psychology journals, and 4.70 for Metalab. The severity of publication bias did differ across individual meta-analyses; in a small minority (10%; 95% CI: [2%, 21%]), publication bias appeared to favor "significant" results in the expected direction by more than threefold. For 89% of meta-analyses, we estimated that the amount of publication bias required to attenuate the point estimate to the null would exceed the amount actually present in the vast majority of meta-analyses from the relevant scientific discipline (that is, would exceed the 95th percentile of estimated publication bias). Study-level measures ("statistical significance" with a point estimate in the expected direction, and point estimate size) did not indicate more publication bias in higher-tier versus lower-tier journals, nor in the earliest studies published on a topic versus later studies. Overall, we conclude that the mere act of performing a meta-analysis with a large number of studies (at least 40), one that includes non-headline results, may largely mitigate publication bias, suggesting optimism about the validity of meta-analytic results.
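The sensitivity analysis described in the abstract (the amount of publication bias needed to attenuate an estimate to the null) can be made concrete with a small numerical sketch. The Python code below is an illustration only, under simplifying assumptions that are mine rather than the paper's: a fixed-effects (common-effect) model with no between-study heterogeneity, a two-tailed 0.05 significance threshold, and hypothetical toy data; the function names are likewise hypothetical. In this simplified setting, an assumed selection ratio eta (how many times more likely "affirmative" results, i.e., significant and in the expected direction, are to be published) is undone by upweighting nonaffirmative studies by eta, and the eta that shifts the pooled estimate to zero has a closed form. The authors' full method, including heterogeneity and inference, is implemented in their R package PublicationBias.

    import numpy as np

    def corrected_estimate(y, se, eta, z_crit=1.96):
        """Bias-corrected pooled estimate under an assumed selection ratio eta.

        Affirmative studies (significant and in the expected direction) are
        assumed eta times more likely to be published, so nonaffirmative
        studies are upweighted by eta to undo the selection. Fixed-effects
        simplification: ignores between-study heterogeneity (tau^2 = 0).
        """
        y, se = np.asarray(y, float), np.asarray(se, float)
        affirmative = y / se > z_crit           # z-test at two-tailed 0.05
        w = 1.0 / se**2                         # inverse-variance weights
        w = np.where(affirmative, w, eta * w)   # upweight underpublished studies
        return np.sum(w * y) / np.sum(w)

    def eta_to_null(y, se, z_crit=1.96):
        """Selection ratio eta at which the corrected estimate reaches zero.

        Closed form in the fixed-effects case; returns inf when no finite
        eta can attenuate a positive pooled estimate to the null.
        """
        y, se = np.asarray(y, float), np.asarray(se, float)
        affirmative = y / se > z_crit
        num = np.sum(y[affirmative] / se[affirmative] ** 2)
        den = np.sum(y[~affirmative] / se[~affirmative] ** 2)
        return -num / den if den < 0 else float("inf")

    # Hypothetical toy data: 6 affirmative and 4 nonaffirmative estimates.
    y  = [0.42, 0.35, 0.51, 0.30, 0.45, 0.38, 0.05, -0.10, 0.02, -0.04]
    se = [0.10, 0.12, 0.15, 0.11, 0.13, 0.12, 0.14, 0.12, 0.10, 0.13]
    print(corrected_estimate(y, se, eta=1.0))   # uncorrected pooled estimate
    print(corrected_estimate(y, se, eta=4.7))   # under Metalab-level selection
    print(eta_to_null(y, se))                   # bias needed to reach the null

Under these toy assumptions, a selection ratio near the overall estimate of 1.17 barely moves the pooled estimate, whereas the eta required to reach the null is far larger; that comparison is the logic behind the 89% figure in the abstract.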
Pages: 176-191
Page count: 16