Evaluating topic model interpretability from a primary care physician perspective

Cited by: 18
Authors
Arnold, Corey W. [1 ]
Oh, Andrea [1 ]
Chen, Shawn [1 ]
Speier, William [1 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Radiol Sci, Med Imaging & Informat Grp, Los Angeles, CA 90024 USA
Funding
US National Institutes of Health;
Keywords
Topic modeling; Primary care; Clinical reports; RECORD;
DOI
10.1016/j.cmpb.2015.10.014
Chinese Library Classification
TP39 [Computer applications];
Subject classification codes
081203; 0835;
Abstract
Background and objective: Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated into a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician's point of view.
Methods: Three latent Dirichlet allocation models (50, 100, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon signed-rank tests for paired samples were used to evaluate differences between topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann-Whitney U tests for each task.
Results: While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better granularity of discovered semantic themes for the data set used in this study. The models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks.
Conclusion: This work establishes a baseline of interpretability for topic models trained on clinical reports and provides insights into the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports than students do, warranting further research into their use for automatic summarization. (C) 2015 Elsevier Ireland Ltd. All rights reserved.
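The comparison the abstract describes can be sketched in a few lines. This is not the authors' code: it uses scikit-learn's `LatentDirichletAllocation` on a toy corpus of invented clinical-style snippets, and the topic counts are scaled down from the study's 50/100/150. It illustrates the mechanic only: fit one LDA model per candidate topic count and compare approximate log likelihoods, which the paper shows need not track human interpretability.

```python
# Hedged sketch (assumed toy corpus, not the study's data): fit LDA at
# several topic counts and compare approximate log likelihood.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "chest pain shortness of breath ecg",
    "diabetes glucose insulin metformin",
    "hypertension blood pressure lisinopril",
    "cough fever chest xray pneumonia",
    "glucose a1c diet insulin diabetes",
    "blood pressure headache amlodipine",
]

# Bag-of-words counts, the input representation LDA expects.
X = CountVectorizer().fit_transform(docs)

# One model per candidate topic count (the paper uses 50/100/150;
# a six-document corpus only supports small values).
for k in (2, 3, 4):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    # score() returns an approximate log likelihood; higher is better,
    # but the study found the best-scoring model (150 topics) was not
    # the most interpretable one (100 topics).
    print(k, round(lda.score(X), 1))
```

In practice the study pairs this quantitative comparison with human evaluation (a word-intrusion-style task), which is the part no likelihood score can replace.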
Pages: 67-75
Page count: 9