Evaluating the relationship between citation set size, team size and screening methods used in systematic reviews: a cross-sectional study

Cited by: 4
Authors
O'Hearn, Katie [1 ]
MacDonald, Cameron [2 ]
Tsampalieros, Anne [1 ]
Kadota, Leo [3 ]
Sandarage, Ryan [4 ]
Jayawarden, Supun Kotteduwa [4 ]
Datko, Michele [5 ]
Reynolds, John M. [6 ]
Bui, Thanh [7 ]
Sultan, Shagufta [8 ]
Sampson, Margaret [9 ]
Pratt, Misty [1 ]
Barrowman, Nick [1 ]
Nama, Nassr [10 ]
Page, Matthew [11 ]
McNally, James Dayre [1 ,3 ,12 ]
Affiliations
[1] CHEO Res Inst, Ottawa, ON, Canada
[2] McMaster Univ, Sch Engn & Appl Sci, Hamilton, ON, Canada
[3] Univ Ottawa, Fac Med, Dept Pediat, Ottawa, ON, Canada
[4] Univ British Columbia, Fac Med, Vancouver, BC, Canada
[5] ECRI, ECRI Informat Ctr, Plymouth Meeting, PA, USA
[6] Univ Miami, Miller Sch Med, Calder Mem Lib, Miami, FL, USA
[7] Univ Toronto, Fac Arts & Sci, Toronto, ON, Canada
[8] Hlth Canada, Therapeut Prod Directorate, Ottawa, ON, Canada
[9] CHEO, Lib Serv, Ottawa, ON, Canada
[10] Univ British Columbia, Fac Med, Dept Pediat, Vancouver, BC, Canada
[11] Monash Univ, Sch Publ Hlth & Prevent Med, Melbourne, Australia
[12] CHEO, Dept Pediat, 401 Smyth Rd, Ottawa, ON K1H 8L1, Canada
Keywords
Systematic reviews; Scoping reviews; Crowdsourcing; Machine learning; Randomized controlled trials; Decontamination; N95
DOI
10.1186/s12874-021-01335-5
Chinese Library Classification (CLC) number
R19 [Health Care Organization and Services (Health Service Management)]
Subject classification number
Abstract
Background: Standard practice for conducting systematic reviews (SRs) is time consuming and involves the study team screening hundreds or thousands of citations. As the volume of medical literature grows, citation set sizes and the corresponding screening effort increase. While larger teams and alternative screening methods have the potential to reduce workload and shorten SR completion times, it is unknown whether investigators adapt team size or methods in response to citation set size. Using a cross-sectional design, we sought to understand how citation set size impacts (1) the total number of authors or individuals contributing to screening and (2) the screening methods used.

Methods: MEDLINE was searched in April 2019 for SRs on any health topic. A total of 1880 unique publications were identified and sorted into five citation set size categories (after deduplication): ≤ 1,000, 1,001-2,500, 2,501-5,000, 5,001-10,000, and > 10,000. A random sample of 259 SRs (~50 per category) was selected for data extraction and analysis.

Results: With the exception of the pairwise t test comparing the ≤ 1,000 and > 10,000 categories (median 5 vs. 6 authors, p = 0.049), no statistically significant relationship was evident between author number and citation set size. While visual inspection was suggestive, statistical testing did not consistently identify a relationship between citation set size and the number of screeners (title-abstract, full text) or data extractors. However, logistic regression identified that investigators were significantly more likely to deviate from gold-standard screening methods (i.e., independent duplicate screening) with larger citation sets: for every doubling of citation set size, the odds of using gold-standard screening decreased by 15% at title-abstract review and 20% at full-text review. Finally, few SRs reported using crowdsourcing (n = 2) or computer-assisted screening (n = 1).

Conclusions: Large citation set sizes present a challenge to SR teams, especially when faced with time-sensitive health policy questions. Our study suggests that as citation set size increases, authors are less likely to adhere to gold-standard screening methods. Adjunct screening methods, such as crowdsourcing (large teams) and computer-assisted technologies, may provide a viable way for authors to complete their SRs in a timely manner.
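The "per doubling" odds ratios in the Results follow from entering citation set size on a log2 scale, so that a one-unit increase in the predictor corresponds to one doubling. The sketch below illustrates that interpretation with simulated data; it is a minimal illustration assuming a Python/statsmodels workflow, and the data, variable names, and coefficient are hypothetical rather than taken from the study's analysis.

```python
# Minimal sketch (simulated data, hypothetical variable names): how entering
# a predictor on the log2 scale makes a logistic-regression coefficient
# interpretable "per doubling" of citation set size.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 259  # matches the study's sample size; everything else is simulated

citations = rng.integers(200, 20_000, size=n)   # citation set sizes
log2_cites = np.log2(citations)                 # +1 unit = one doubling

# Simulate whether a review used gold-standard (independent duplicate)
# screening, with the odds falling as citation sets grow. A log-odds slope
# of -0.22 implies an odds ratio of exp(-0.22) ≈ 0.80 per doubling, i.e. a
# ~20% decrease in the odds, similar to the full-text result reported above.
log_odds = 3.0 - 0.22 * log2_cites
p_gold = 1 / (1 + np.exp(-log_odds))
gold_standard = rng.binomial(1, p_gold)

df = pd.DataFrame({"gold_standard": gold_standard, "log2_cites": log2_cites})
fit = smf.logit("gold_standard ~ log2_cites", data=df).fit(disp=False)

or_per_doubling = np.exp(fit.params["log2_cites"])
print(f"Estimated odds ratio per doubling: {or_per_doubling:.2f}")
```

Because the predictor is log2-transformed, exponentiating its fitted coefficient gives the odds ratio per doubling directly, which is why the abstract can state the effect as a percentage decrease in odds per doubling of citation set size.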
Pages: 12