Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumor extent

Cited by: 335
Authors
Cruz-Roa, Angel [1 ,2 ]
Gilmore, Hannah [3 ]
Basavanhally, Ajay [4 ]
Feldman, Michael [5 ]
Ganesan, Shridar [6 ]
Shih, Natalie N. C. [5 ]
Tomaszewski, John [7 ]
Gonzalez, Fabio A. [1 ]
Madabhushi, Anant [8 ]
Affiliations
[1] Univ Nacl Colombia, Bogota, Colombia
[2] Univ Llanos, Villavicencio, Colombia
[3] Univ Hosp Case Med Ctr, Cleveland, OH USA
[4] Inspirata Inc, Tampa, FL USA
[5] Hosp Univ Penn, 3400 Spruce St, Philadelphia, PA 19104 USA
[6] Canc Inst New Jersey, New Brunswick, NJ USA
[7] Univ Buffalo State Univ New York, Buffalo, NY USA
[8] Case Western Reserve Univ, Cleveland, OH 44106 USA
Funding
US National Institutes of Health;
Keywords
INTEROBSERVER VARIABILITY; CARCINOMA; PATHOLOGY; AGREEMENT; RICHARDSON; FRAMEWORK; SCHEME; GRADE; BLOOM;
DOI
10.1038/srep46450
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject classification codes
07; 0710; 09;
Abstract
With the increasing ability to routinely and rapidly digitize whole slide images with slide scanners, there has been interest in developing computerized image analysis algorithms for automated detection of disease extent from digital pathology images. The manual identification of the presence and extent of breast cancer by a pathologist is critical for patient management, tumor staging, and assessing treatment response. However, this process is tedious and subject to inter- and intra-reader variability. For computerized methods to be useful as decision support tools, they need to be resilient to data acquired from different sources, with different staining and cutting protocols, and on different scanners. The objective of this study was to evaluate the accuracy and robustness of a deep learning-based method for automatically identifying the extent of invasive tumor on digitized images. Here, we present a new method that employs a convolutional neural network to detect the presence of invasive tumor on whole slide images. Our approach involves training the classifier on nearly 400 exemplars from multiple different sites and scanners, and then independently validating it on almost 200 cases from The Cancer Genome Atlas. Our approach yielded a Dice coefficient of 75.86%, a positive predictive value of 71.62%, and a negative predictive value of 96.77% in a pixel-by-pixel evaluation against manually annotated regions of invasive ductal carcinoma.
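The reported Dice coefficient, positive predictive value (PPV), and negative predictive value (NPV) are computed pixel by pixel between the predicted tumor region and the pathologist's annotation. As a minimal illustrative sketch (not the authors' code), the Python snippet below shows how such pixel-wise metrics can be derived from a binary prediction mask and a ground-truth mask; the 0.5 probability threshold, array sizes, and mask contents are hypothetical stand-ins for a CNN's slide-level probability map and an annotated invasive ductal carcinoma region.

```python
import numpy as np

def pixelwise_metrics(pred_mask, gt_mask):
    """Pixel-wise Dice coefficient, PPV, and NPV for two binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)

    tp = np.count_nonzero(pred & gt)    # tumor pixels correctly detected
    fp = np.count_nonzero(pred & ~gt)   # non-tumor pixels flagged as tumor
    fn = np.count_nonzero(~pred & gt)   # tumor pixels missed
    tn = np.count_nonzero(~pred & ~gt)  # non-tumor pixels correctly ignored

    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    npv = tn / (tn + fn) if (tn + fn) else 0.0
    return dice, ppv, npv

# Hypothetical example: threshold a tumor-probability map (stand-in for
# CNN output over a whole-slide region) at 0.5 and compare it with a
# synthetic annotation mask.
rng = np.random.default_rng(0)
prob_map = rng.random((512, 512))              # fake probability map
annotation = np.zeros((512, 512), dtype=bool)  # fake annotated IDC region
annotation[100:300, 150:400] = True

dice, ppv, npv = pixelwise_metrics(prob_map > 0.5, annotation)
print(f"Dice={dice:.4f}  PPV={ppv:.4f}  NPV={npv:.4f}")
```

In a whole-slide setting, such a prediction mask would typically be obtained by tiling the slide into patches, applying the trained CNN classifier to each patch, and stitching the per-patch probabilities back into a slide-level map before thresholding; the tiling and threshold details here are assumptions for illustration only.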
Pages: 14