Correcting for the Multiplicative and Additive Effects of Measurement Unreliability in Meta-Analysis of Correlations

Cited by: 1
Authors
Ke, Zijun [1 ]
Tong, Xin [2 ]
Affiliations
[1] Sun Yat Sen Univ, Guangdong Prov Key Lab Social Cognit Neurosci & M, Dept Psychol, Guangzhou, Peoples R China
[2] Univ Virginia, Dept Psychol, 102 Gilmer Hall, Charlottesville, VA 22903 USA
Funding
National Natural Science Foundation of China;
Keywords
unreliability correction; meta-analysis; residual correlation; common method bias; common method variance; STATISTICAL POWER; EFFECT SIZES; STRUCTURAL PARAMETERS; DISCRIMINANT VALIDITY; LIFE SATISFACTION; PUBLICATION BIAS; JUDGMENT CALLS; MEDIATING ROLE; SAMPLE-SIZE; PERSONALITY;
DOI
10.1037/met0000396
Chinese Library Classification
B84 [Psychology];
Discipline Code
04 ; 0402 ;
Abstract
As a powerful tool for synthesizing information from multiple studies, meta-analysis has gained great popularity across many disciplines. Conclusions drawn from meta-analyses are often used to direct theory development, calibrate sample-size planning, and guide critical decision-making and policymaking. However, meta-analyses can be conflicting, misleading, and irreproducible. One reason meta-analyses mislead is the improper handling of measurement unreliability. We show that even when there is no publication bias, current meta-analysis procedures frequently detect nonexistent effects and provide severely biased estimates and intervals with coverage rates far below the intended level. In this study, an effective approach to correcting for unreliability is proposed and evaluated via simulation studies. Its sensitivity to violations of the homogeneous-reliability and residual-correlation assumptions is also tested. The proposed method is illustrated using a real meta-analysis on the relationship between extroversion and subjective well-being. Substantial differences in meta-analytic results are observed between the proposed method and existing methods. Further, although not specifically designed for aggregating effect sizes across different measures, the proposed method can also serve that purpose. The study ends with a discussion of limitations and guidelines for implementing the proposed approach.

Translational Abstract
Measurement unreliability refers to the overall inconsistency of a measure. Statistically, it indicates the amount of error in the observed scores of a measure. Measurement unreliability can distort meta-analytic results for correlations both multiplicatively and additively. The multiplicative effect of unreliability produces substantial downward biases in meta-analytic correlations, whereas the additive effect can introduce both downward and upward biases. Both effects can lead to quantitatively false conclusions, for example, detecting nonexistent effects and providing intervals with coverage rates far below the intended level. Existing correction methods consider only the multiplicative effect of unreliability. In this study, we propose a new unreliability correction method that takes into account both the multiplicative and the additive effects of unreliability. The new method also provides a tool for researchers to aggregate correlations based on different measures. We illustrate the proposed method using a real meta-analysis on the relationship between extroversion and subjective well-being, and evaluate it using Monte Carlo simulation studies. Limitations and practical guidance are included. R Markdown files including snippets of embedded R code, annotations, and results for all analyses are provided.
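The multiplicative and additive effects described in the abstract can be made concrete with the classic attenuation formulas from classical test theory. The sketch below is illustrative only and is not the paper's proposed estimator; the function names and numeric values are hypothetical.

```python
import math

def attenuated(rho, rxx, ryy):
    """Observed correlation implied by true correlation rho when X and Y
    are measured with reliabilities rxx and ryy (multiplicative effect)."""
    return rho * math.sqrt(rxx * ryy)

def disattenuate(r_obs, rxx, ryy):
    """Classic Spearman correction for attenuation: recovers rho from the
    observed correlation, assuming uncorrelated measurement errors
    (i.e., no additive effect)."""
    return r_obs / math.sqrt(rxx * ryy)

# Example: true correlation 0.30, reliabilities 0.70 and 0.80.
r_obs = attenuated(0.30, 0.70, 0.80)       # attenuated below 0.30
rho_hat = disattenuate(r_obs, 0.70, 0.80)  # recovers 0.30

# If the two measures' error terms themselves correlate (e.g., common
# method variance), the observed correlation also picks up an additive
# term that the classic multiplicative correction does not remove:
r_err = 0.20  # hypothetical correlation between the error terms
r_obs_add = (0.30 * math.sqrt(0.70 * 0.80)
             + r_err * math.sqrt((1 - 0.70) * (1 - 0.80)))
```

Applying `disattenuate` to `r_obs_add` would overestimate the true correlation here, which is the upward bias from the additive effect that the abstract describes.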
Pages: 21-38
Page count: 18
Related Papers
50 items in total
  • [1] Correcting for Differences in Measurement Unreliability in Meta-Analysis of Variances
    Jansen, Katrin
    Nestler, Steffen
    MULTIVARIATE BEHAVIORAL RESEARCH, 2025,
  • [2] Correcting Bias in the Meta-Analysis of Correlations
    Stanley, T. D.
    Doucouliagos, Hristos
    Maier, Maximilian
    Bartos, Frantisek
    PSYCHOLOGICAL METHODS, 2024,
  • [3] A Meta-Analysis of the Interactive, Additive, and Relative Effects of Cognitive Ability and Motivation on Performance
    Van Iddekinge, Chad H.
    Aguinis, Herman
    Mackey, Jeremy D.
    DeOrtentiis, Philip S.
    JOURNAL OF MANAGEMENT, 2018, 44 (01) : 249 - 279
  • [4] A Bayesian "Fill-In" Method for Correcting for Publication Bias in Meta-Analysis
    Du, Han
    Liu, Fang
    Wang, Lijuan
    PSYCHOLOGICAL METHODS, 2017, 22 (04) : 799 - 817
  • [5] Effects of Spatial Training on Mathematics Performance: A Meta-Analysis
    Hawes, Zachary C. K.
    Gilligan-Lee, Katie A.
    Mix, Kelly S.
    DEVELOPMENTAL PSYCHOLOGY, 2022, 58 (01) : 112 - 137
  • [6] A Bayesian approach for correcting exposure misclassification in meta-analysis
    Lian, Qinshu
    Hodges, James S.
    MacLehose, Richard
    Chu, Haitao
    STATISTICS IN MEDICINE, 2019, 38 (01) : 115 - 130
  • [7] Are We Correcting Correctly? Interdependence of Reliabilities in Meta-Analysis
    Koehler, Tine
    Cortina, Jose M.
    Kurtessis, James N.
    Goelz, Markus
    ORGANIZATIONAL RESEARCH METHODS, 2015, 18 (03) : 355 - 428
  • [8] Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach
    Zhu, Qiaohao
    Carriere, K. C.
    STATISTICAL METHODS IN MEDICAL RESEARCH, 2018, 27 (09) : 2722 - 2741
  • [9] The Effects of Mindfulness Meditation: A Meta-Analysis
    Eberth, Juliane
    Sedlmeier, Peter
    MINDFULNESS, 2012, 3 (03) : 174 - 189
  • [10] Winner and loser effects: a meta-analysis
    Yan, Janice L.
    Smith, Noah M. T.
    Filice, David C. S.
    Dukas, Reuven
    ANIMAL BEHAVIOUR, 2024, 216 : 15 - 22