Automated confidence ranked classification of randomized controlled trial articles: an aid to evidence-based medicine

Cited by: 33
Authors
Cohen, Aaron M. [1]
Smalheiser, Neil R. [2]
McDonagh, Marian S. [1]
Yu, Clement [3]
Adams, Clive E. [4]
Davis, John M. [2]
Yu, Philip S. [3]
Affiliations
[1] Oregon Hlth & Sci Univ, Dept Med Informat & Clin Epidemiol, Portland, OR 97239 USA
[2] Univ Illinois, Dept Psychiat, Chicago, IL 60612 USA
[3] Univ Illinois, Dept Comp Sci, Chicago, IL 60612 USA
[4] Univ Nottingham, Div Psychiat, Nottingham NG7 2RD, England
Funding
US National Institutes of Health
Keywords
Support Vector Machines; Natural Language Processing; Randomized Controlled Trials as Topic; Evidence-Based Medicine; Systematic Reviews; Information Retrieval; SYSTEMATIC REVIEWS; RETRIEVAL; WORKLOAD; MEDLINE; UPDATE;
DOI
10.1093/jamia/ocu025
Chinese Library Classification (CLC) Code
TP [Automation and Computer Technology]
Discipline Classification Code
0812
Abstract
Objective: For many literature review tasks, including systematic review (SR) and other aspects of evidence-based medicine, it is important to know whether an article describes a randomized controlled trial (RCT). Current manual annotation is not complete or flexible enough for the SR process. In this work, highly accurate machine learning predictive models were built that include confidence predictions of whether an article is an RCT.
Materials and Methods: The LibSVM classifier was used with forward selection of potential feature sets on a large human-related subset of MEDLINE to create a classification model requiring only the citation, abstract, and MeSH terms for each article.
Results: The model achieved an area under the receiver operating characteristic curve of 0.973 and a mean squared error of 0.013 on the held-out year 2011 data. Accurate confidence estimates were confirmed on a manually reviewed set of test articles. A second model that does not require MeSH terms was also created and performs almost as well.
Discussion: Both models accurately rank articles and predict RCT confidence. Using the model and the manually reviewed samples, it is estimated that about 8000 (3%) additional RCTs can be identified in MEDLINE, and that 5% of articles tagged as RCTs in MEDLINE may not be identified.
Conclusion: Retagging human-related studies with a continuously valued RCT confidence is potentially more useful for article ranking and review than a simple yes/no prediction. The automated RCT tagging tool should offer significant savings of time and effort during the process of writing SRs, and is a key component of a multistep text mining pipeline that we are building to streamline SR workflow. In addition, the model may be useful for identifying errors in MEDLINE publication types. The RCT confidence predictions described here have been made available to users as a web service with a user query form front end at: http://arrowsmith.psych.uic.edu/cgi-bin/arrowsmith_uic/RCT_Tagger.cgi.
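A minimal sketch of the general approach the abstract describes, assuming scikit-learn's LibSVM-backed SVC rather than the authors' actual pipeline, feature sets, or MEDLINE training corpus: an SVM whose Platt-scaled probability output serves as a per-article RCT confidence score, used to rank articles and evaluated with AUC and mean squared error. The toy corpus, variable names, and parameters below are hypothetical placeholders.

```python
# Illustrative sketch only (not the authors' pipeline): train an SVM-based
# RCT classifier that outputs a calibrated confidence per article, rank
# articles by that confidence, and report AUC and mean squared error.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC                      # scikit-learn wraps LibSVM
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

# Hypothetical corpus: one string per article standing in for the combined
# title, abstract, and MeSH terms; label 1 = RCT, 0 = not an RCT.
texts = [
    "a randomized controlled trial of drug A versus placebo in adults",
    "double blind randomized trial comparing therapy B with standard care",
    "randomized multicenter trial of early intervention for stroke",
    "a randomised placebo controlled trial of vitamin supplementation",
    "randomized crossover trial of two inhaler devices in asthma",
    "pragmatic randomized trial of telehealth follow up after surgery",
    "case report of a rare adverse drug reaction in an elderly patient",
    "narrative review of treatment options for chronic migraine",
    "retrospective cohort study of readmission rates after discharge",
    "cross sectional survey of physician attitudes toward guidelines",
    "animal model study of wound healing in diabetic mice",
    "qualitative interview study of patient experiences with dialysis",
]
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=4, random_state=0, stratify=labels)

# Simple bag-of-words / bigram features; the paper's feature sets were
# chosen by forward selection and are not reproduced here.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
Xtr = vectorizer.fit_transform(X_train)
Xte = vectorizer.transform(X_test)

# probability=True adds Platt-scaled probability estimates on top of the
# SVM decision function, which is what enables confidence-ranked output.
clf = SVC(kernel="linear", C=1.0, probability=True, random_state=0)
clf.fit(Xtr, y_train)

confidence = clf.predict_proba(Xte)[:, 1]        # estimated P(article is an RCT)

# Rank held-out articles from most to least likely to be an RCT.
for score, text in sorted(zip(confidence, X_test), reverse=True):
    print(f"{score:.3f}  {text[:60]}")

# AUC measures ranking quality; the Brier score is the mean squared error
# of the confidence estimates against the true 0/1 labels.
print("AUC:", roc_auc_score(y_test, confidence))
print("MSE (Brier):", brier_score_loss(y_test, confidence))
```

In practice a model of this kind would be trained on a large human-related subset of MEDLINE citations (title, abstract, and MeSH terms) rather than a dozen toy strings, and evaluated on a held-out publication year, as the paper does with year 2011 data.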
Pages: 707-717
Page count: 11