High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks

Cited by: 0
Authors
Alvin Rajkomar
Sneha Lingam
Andrew G. Taylor
Michael Blum
John Mongan
Affiliations
[1] Department of Medicine, Division of Hospital Medicine, University of California, San Francisco
[2] Center for Digital Health Innovation, University of California, San Francisco
[3] Department of Radiology and Biomedical Imaging, University of California, San Francisco
Source
Journal of Digital Imaging | 2017, Vol. 30
Keywords
Radiography; Chest radiographs; Machine learning; Artificial neural networks; Computer vision; Deep learning; Convolutional neural network
DOI
Not available
Abstract
The study aimed to determine whether computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed using the Youden index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73–100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
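The abstract outlines a transfer-learning recipe: start from GoogLeNet pre-trained on (greyscale) ImageNet, fine-tune on augmented radiographs, then fix a frontal/lateral decision cutoff with the Youden index. The following is a minimal sketch of that recipe, assuming PyTorch and torchvision >= 0.13; the framework, augmentations, hyperparameters, and helper names (`fine_tune_step`, `youden_cutoff`) are illustrative assumptions rather than the authors' pipeline, and the standard torchvision weights are RGB ImageNet rather than the greyscale-rendered ImageNet used in the study.

```python
# Minimal sketch of GoogLeNet fine-tuning for frontal/lateral chest radiograph
# classification. Not the authors' code: framework, transforms, and helper names
# are assumptions made for illustration only.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms

# GoogLeNet (Inception v1) pre-trained on ImageNet. The study pre-trained on a
# greyscale rendering of ImageNet; the stock torchvision weights are RGB-trained.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: frontal, lateral

# "Standard image manipulations" of the kind used to augment the training set
# (the exact augmentations are not specified in the abstract).
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # radiographs are single-channel
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one fine-tuning step on a batch of augmented radiographs."""
    model.train()
    optimizer.zero_grad()
    out = model(images)
    # In training mode a pre-trained GoogLeNet also returns auxiliary logits.
    logits = out.logits if isinstance(out, tuple) else out
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def youden_cutoff(frontal_probs: np.ndarray, labels: np.ndarray) -> float:
    """Cutoff on the frontal-class probability that maximizes
    Youden's J = sensitivity + specificity - 1."""
    best_j, best_t = -1.0, 0.5
    for t in np.unique(frontal_probs):
        pred = frontal_probs >= t
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t
```

In this sketch, `fine_tune_step` would be called over DataLoader batches of the augmented training set, and the validation-set softmax probabilities for the frontal class would be passed to `youden_cutoff` to fix the binary threshold applied to the test sets; both helpers are hypothetical names for steps described in the abstract.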
Pages: 95–101
Number of pages: 6