Reviewer training for improving grant and journal peer review

Cited: 5
Authors
Hesselberg, Jan-Ole [1 ,2 ]
Dalsbo, Therese K. [3 ]
Stromme, Hilde [4 ]
Svege, Ida [2 ,5 ]
Fretheim, Atle [5 ,6 ]
Affiliations
[1] Univ Oslo, Dept Psychol, Oslo, Norway
[2] Stiftelsen Dam, Oslo, Norway
[3] Natl Inst Occupat Hlth, Oslo, Norway
[4] Univ Oslo, Med Lib, Oslo, Norway
[5] Oslo Metropolitan Univ, Fac Hlth Sci, Oslo, Norway
[6] Norwegian Inst Publ Hlth, Ctr Epidem Intervent Res, Oslo, Norway
Source
COCHRANE DATABASE OF SYSTEMATIC REVIEWS | 2023, Issue 11
Keywords
Bias; Checklist; Peer Review; Research; Publishing; Reproducibility of Results; CRITICAL-APPRAISAL; QUALITY; BIAS; EDITORS; TRIAL;
DOI
10.1002/14651858.MR000056.pub2
Chinese Library Classification
R5 [Internal Medicine];
Subject Classification Codes
1002; 100201;
Abstract
Background
Funders and scientific journals use peer review to decide which projects to fund or which articles to publish. Reviewer training is an intervention intended to improve the quality of peer review. However, studies on the effects of such training yield inconsistent results, and there are no up-to-date systematic reviews addressing this question.

Objectives
To evaluate the effect of peer reviewer training on the quality of grant and journal peer review.

Search methods
We used standard, extensive Cochrane search methods. The latest search date was 27 April 2022.

Selection criteria
We included randomized controlled trials (RCTs, including cluster-RCTs) that evaluated peer review with training interventions versus usual processes, no training interventions, or other interventions to improve the quality of peer review.

Data collection and analysis
We used standard Cochrane methods. Our primary outcomes were 1. completeness of reporting and 2. peer review detection of errors. Our secondary outcomes were 1. bibliometric scores, 2. stakeholders' assessment of peer review quality, 3. inter-reviewer agreement, 4. process-centred outcomes, 5. peer reviewer satisfaction, and 6. completion rate and speed of funded projects. We used the first version of the Cochrane risk of bias tool to assess the risk of bias, and we used GRADE to assess the certainty of the evidence.

Main results
We included 10 RCTs with a total of 1213 units of analysis. The unit of analysis was the individual reviewer in seven studies (722 reviewers in total), and the reviewed manuscript in three studies (491 manuscripts in total). In eight RCTs, participants were journal peer reviewers; in two studies, participants were grant peer reviewers. The training interventions can be broadly divided into dialogue-based interventions (interactive workshop, face-to-face training, mentoring) and one-way communication (written information, video course, checklist, written feedback). Most studies were small.

We found moderate-certainty evidence that emails reminding peer reviewers to check items of reporting checklists, compared with standard journal practice, have little or no effect on the completeness of reporting, measured as the proportion of items (from 0.00 to 1.00) that were adequately reported (mean difference (MD) 0.02, 95% confidence interval (CI) -0.02 to 0.06; 2 RCTs, 421 manuscripts). There was low-certainty evidence that reviewer training, compared with standard journal practice, slightly improves peer reviewer ability to detect errors (MD 0.55, 95% CI 0.20 to 0.90; 1 RCT, 418 reviewers). We found low-certainty evidence that reviewer training, compared with standard journal practice, has little or no effect on stakeholders' assessment of review quality in journal peer review (standardized mean difference (SMD) 0.13 standard deviations (SDs), 95% CI -0.07 to 0.33; 1 RCT, 418 reviewers), or on change in stakeholders' assessment of review quality in journal peer review (SMD -0.15 SDs, 95% CI -0.39 to 0.10; 5 RCTs, 258 reviewers). We found very low-certainty evidence that a video course, compared with no video course, has little or no effect on inter-reviewer agreement in grant peer review (MD 0.14 points, 95% CI -0.07 to 0.35; 1 RCT, 75 reviewers). There was low-certainty evidence that structured individual feedback on scoring, compared with general information on scoring, has little or no effect on the change in inter-reviewer agreement in grant peer review (MD 0.18 points, 95% CI -0.14 to 0.50; 1 RCT, 41 reviewers).

Authors' conclusions
Evidence from 10 RCTs suggests that training peer reviewers may lead to little or no improvement in the quality of peer review. There is a need for studies with more participants and a broader spectrum of valid and reliable outcome measures. Studies evaluating stakeholders' assessments of the quality of peer review should ensure that these instruments have sufficient levels of validity and reliability.
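The pooled effects above are reported as mean differences (MD) with 95% confidence intervals. As a minimal illustration of how such a pooled estimate is typically derived, the sketch below implements standard fixed-effect inverse-variance pooling; the two study values fed into it are hypothetical examples, not data from this review.

```python
import math

def pooled_md(mds, ses):
    """Fixed-effect (inverse-variance) pooled mean difference with 95% CI.

    mds: per-study mean differences.
    ses: the corresponding standard errors.
    Each study is weighted by 1/SE^2; the CI uses the normal
    approximation (z = 1.96).
    """
    weights = [1.0 / se ** 2 for se in ses]
    md = sum(w * m for w, m in zip(weights, mds)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return md, (md - 1.96 * se, md + 1.96 * se)

# Hypothetical two-study example (illustrative values only):
md, (lo, hi) = pooled_md([0.01, 0.03], [0.03, 0.03])
```

With equal standard errors, the pooled MD is simply the mean of the study estimates, and the pooled standard error shrinks as more studies contribute weight.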
Pages: 53