De-identification of clinical narratives through writing complexity measures

Cited by: 14
Authors
Li, Muqun [1 ]
Carrell, David [2 ]
Aberdeen, John [3 ]
Hirschman, Lynette [3 ]
Malin, Bradley A. [1 ,4 ]
Affiliations
[1] Vanderbilt Univ, Dept Elect Engn & Comp Sci, Nashville, TN 37235 USA
[2] Grp Hlth Res Inst, Seattle, WA USA
[3] Mitre Corp, Bedford, MA 01730 USA
[4] Vanderbilt Univ, Dept Biomed Informat, Nashville, TN 37235 USA
Funding
U.S. National Science Foundation;
Keywords
Electronic medical records; Privacy; Natural language processing; STATE-OF-THE-ART; MEDICAL-RECORDS; TEXT; VALIDATION; DOCUMENTS; SYSTEM;
DOI
10.1016/j.ijmedinf.2014.07.002
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Purpose: Electronic health records contain a substantial quantity of clinical narrative, which is increasingly reused for research purposes. To share data on a large scale while respecting privacy, it is critical to remove patient identifiers. De-identification tools based on machine learning have been proposed; however, model training is usually based on either a random group of documents or a pre-existing document type designation (e.g., discharge summary). This work investigates whether inherent features, such as writing complexity, can identify document subsets that enhance de-identification performance.

Methods: We applied an unsupervised clustering method to group two corpora based on writing complexity measures: a collection of over 4500 documents of varying types (e.g., discharge summaries, history and physical reports, and radiology reports) from Vanderbilt University Medical Center (VUMC), and the publicly available i2b2 corpus of 889 discharge summaries. We compared the performance (via recall, precision, and F-measure) of de-identification models trained on such clusters with that of models trained on documents grouped randomly or by VUMC document type.

Results: For the Vanderbilt dataset, training and testing de-identification models on the same stylometric cluster (average F-measure of 0.917) tended to outperform models based on clusters of random documents (average F-measure of 0.881). Increasing the size of a training subset sampled from a specific cluster also yielded improved results (e.g., for subsets from one stylometric cluster, the F-measure rose from 0.743 to 0.841 as the training size increased from 10 to 50 documents, and reached 0.901 when the training subset grew to 200 documents). For the i2b2 dataset, training and testing on the same clusters based on complexity measures (average F-measure of 0.966) did not significantly surpass randomly selected clusters (average F-measure of 0.965).

Conclusions: Our findings illustrate that, in environments with a variety of clinical documentation, de-identification models trained on clusters defined by writing complexity measures outperform models trained on random groups and, in many instances, models trained on document types. (C) 2014 Elsevier Ireland Ltd. All rights reserved.
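The record provides no code, but the clustering step described in the Methods can be illustrated with a minimal sketch. The example below is hypothetical and is not the authors' implementation: it assumes three simple stylometric features (average sentence length, average word length, and type-token ratio) and k-means clustering via scikit-learn, whereas the study's actual feature set and clustering algorithm may differ. De-identification models would then be trained and evaluated (precision, recall, F-measure) within versus across the resulting clusters.

```python
# Minimal sketch (not the authors' code): group clinical notes into clusters
# based on simple writing-complexity features. Feature choices and k are
# illustrative assumptions.
import re
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans


def complexity_features(text: str) -> list[float]:
    """Compute simple stylometric measures for one document."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return [0.0, 0.0, 0.0]
    avg_sentence_len = len(words) / max(len(sentences), 1)            # words per sentence
    avg_word_len = sum(len(w) for w in words) / len(words)            # characters per word
    type_token_ratio = len({w.lower() for w in words}) / len(words)   # lexical variety
    return [avg_sentence_len, avg_word_len, type_token_ratio]


def cluster_documents(docs: list[str], k: int = 3) -> np.ndarray:
    """Assign each document to one of k stylometric clusters with k-means."""
    X = StandardScaler().fit_transform([complexity_features(d) for d in docs])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)


if __name__ == "__main__":
    corpus = [
        "Patient admitted with chest pain. ECG obtained. Troponin negative.",
        "The patient is a 67-year-old male with a long-standing history of "
        "congestive heart failure who presents today for routine follow-up "
        "after his recent hospitalization for an acute exacerbation.",
        "CXR: no acute process. Impression: stable.",
    ]
    print(cluster_documents(corpus, k=2))  # one cluster label per document
```

In this sketch, terse report-style notes and longer narrative notes would tend to fall into different clusters; a separate de-identification model could then be trained per cluster, mirroring the comparison of within-cluster versus random-group training described in the abstract.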
Pages: 750-767
Number of pages: 18