Reviewer training for improving grant and journal peer review

Cited: 5
Authors
Hesselberg, Jan-Ole [1 ,2 ]
Dalsbo, Therese K. [3 ]
Stromme, Hilde [4 ]
Svege, Ida [2 ,5 ]
Fretheim, Atle [5 ,6 ]
Affiliations
[1] Univ Oslo, Dept Psychol, Oslo, Norway
[2] Stiftelsen Dam, Oslo, Norway
[3] Natl Inst Occupat Hlth, Oslo, Norway
[4] Univ Oslo, Med Lib, Oslo, Norway
[5] Oslo Metropolitan Univ, Fac Hlth Sci, Oslo, Norway
[6] Norwegian Inst Publ Hlth, Ctr Epidem Intervent Res, Oslo, Norway
Source
COCHRANE DATABASE OF SYSTEMATIC REVIEWS | 2023, Issue 11
Keywords
Bias; Checklist; Peer Review; Research; Publishing; Reproducibility of Results; CRITICAL-APPRAISAL; QUALITY; BIAS; EDITORS; TRIAL;
DOI
10.1002/14651858.MR000056.pub2
Chinese Library Classification (CLC)
R5 [Internal Medicine]
Subject classification codes
1002; 100201
Abstract
Background: Funders and scientific journals use peer review to decide which projects to fund or articles to publish. Reviewer training is an intervention to improve the quality of peer review. However, studies on the effects of such training yield inconsistent results, and there are no up-to-date systematic reviews addressing this question.
Objectives: To evaluate the effect of peer reviewer training on the quality of grant and journal peer review.
Search methods: We used standard, extensive Cochrane search methods. The latest search date was 27 April 2022.
Selection criteria: We included randomized controlled trials (RCTs; including cluster-RCTs) that evaluated peer review with training interventions versus usual processes, no training interventions, or other interventions to improve the quality of peer review.
Data collection and analysis: We used standard Cochrane methods. Our primary outcomes were 1. completeness of reporting and 2. peer review detection of errors. Our secondary outcomes were 1. bibliometric scores, 2. stakeholders' assessment of peer review quality, 3. inter-reviewer agreement, 4. process-centred outcomes, 5. peer reviewer satisfaction, and 6. completion rate and speed of funded projects. We used the first version of the Cochrane risk of bias tool to assess the risk of bias, and we used GRADE to assess the certainty of evidence.
Main results: We included 10 RCTs with a total of 1213 units of analysis. The unit of analysis was the individual reviewer in seven studies (722 reviewers in total) and the reviewed manuscript in three studies (491 manuscripts in total). In eight RCTs, participants were journal peer reviewers; in two studies, they were grant peer reviewers. The training interventions can be broadly divided into dialogue-based interventions (interactive workshop, face-to-face training, mentoring) and one-way communication (written information, video course, checklist, written feedback). Most studies were small.
We found moderate-certainty evidence that emails reminding peer reviewers to check items of reporting checklists, compared with standard journal practice, have little or no effect on the completeness of reporting, measured as the proportion of items (from 0.00 to 1.00) that were adequately reported (mean difference (MD) 0.02, 95% confidence interval (CI) -0.02 to 0.06; 2 RCTs, 421 manuscripts). There was low-certainty evidence that reviewer training, compared with standard journal practice, slightly improves peer reviewer ability to detect errors (MD 0.55, 95% CI 0.20 to 0.90; 1 RCT, 418 reviewers). We found low-certainty evidence that reviewer training, compared with standard journal practice, has little or no effect on stakeholders' assessment of review quality in journal peer review (standardized mean difference (SMD) 0.13 standard deviations (SDs), 95% CI -0.07 to 0.33; 1 RCT, 418 reviewers), or on the change in stakeholders' assessment of review quality in journal peer review (SMD -0.15 SDs, 95% CI -0.39 to 0.10; 5 RCTs, 258 reviewers). We found very low-certainty evidence that a video course, compared with no video course, has little or no effect on inter-reviewer agreement in grant peer review (MD 0.14 points, 95% CI -0.07 to 0.35; 1 RCT, 75 reviewers). There was low-certainty evidence that structured individual feedback on scoring, compared with general information on scoring, has little or no effect on the change in inter-reviewer agreement in grant peer review (MD 0.18 points, 95% CI -0.14 to 0.50; 1 RCT, 41 reviewers).
Authors' conclusions: Evidence from 10 RCTs suggests that training peer reviewers may lead to little or no improvement in the quality of peer review. There is a need for studies with more participants and a broader spectrum of valid and reliable outcome measures. Studies evaluating stakeholders' assessments of the quality of peer review should ensure that these instruments have sufficient levels of validity and reliability.
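As a rough illustration of the statistics reported under Main results (a minimal sketch in Python with hypothetical per-study values, not data from the review): a pooled mean difference (MD) and its 95% confidence interval can be obtained by inverse-variance (fixed-effect) weighting of the individual study estimates.

import math

# Hypothetical (MD, standard error) pairs for two illustrative RCTs;
# these numbers are invented for demonstration only.
studies = [(0.01, 0.030), (0.03, 0.025)]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled_md = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))     # standard error of the pooled MD

ci_low = pooled_md - 1.96 * pooled_se
ci_high = pooled_md + 1.96 * pooled_se
print(f"Pooled MD {pooled_md:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")

The review itself pools estimates using standard Cochrane methods; this sketch only shows the arithmetic behind an "MD with 95% CI" result line such as those quoted above.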
Pages: 53