An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization

Cited by: 121
Authors
Shen, Yiqiu [1]
Wu, Nan [1]
Phang, Jason [1]
Park, Jungkyu [2]
Liu, Kangning [1]
Tyagi, Sudarshini [4]
Heacock, Laura [2,5]
Kim, S. Gene [2,3,5]
Moy, Linda [2,3,5]
Cho, Kyunghyun [1,4]
Geras, Krzysztof J. [1,2,3]
Affiliations
[1] NYU, Ctr Data Sci, 60 5th Ave, New York, NY 10011 USA
[2] NYU Sch Med, Dept Radiol, 530 1st Ave, New York, NY 10016 USA
[3] NYU Langone Hlth, Ctr Adv Imaging Innovat & Res, 660 1st Ave, New York, NY 10016 USA
[4] NYU, Courant Inst Math Sci, Dept Comp Sci, 251 Mercer St, New York, NY 10012 USA
[5] NYU Langone Hlth, Perlmutter Canc Ctr, 160 E 34th St, New York, NY 10016 USA
Funding
U.S. National Science Foundation; U.S. National Institutes of Health
Keywords
Deep learning; Breast cancer screening; Weakly supervised localization; High-resolution image classification; False-positive reduction; Mass detection; Neural networks; Mammography; Segmentation; Mortality; Update; System; Risk
DOI
10.1016/j.media.2020.101908
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Medical images differ from natural images in that they have significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from the chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11. (c) 2020 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Pages: 17
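
The abstract describes a three-stage pipeline: a low-capacity, memory-efficient global network that scans the full-resolution image and produces a saliency map, a higher-capacity local network applied to the most salient regions, and a fusion module that combines global and local evidence into a single prediction. The following is a minimal, hypothetical PyTorch sketch of such a pipeline, not the authors' released implementation; the backbone architectures, the number and size of the retrieved patches, and the mean-pooling aggregation of patch features are all assumptions made for illustration.

import torch
import torch.nn as nn


class GlobalLocalFusionClassifier(nn.Module):
    # Illustrative sketch of a global/local/fusion classifier (not the paper's code).

    def __init__(self, num_classes=2, num_patches=6, patch_size=224):
        super().__init__()
        self.num_patches = num_patches
        self.patch_size = patch_size
        # Low-capacity global network: coarse per-class saliency over the full image.
        self.global_net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )
        # Higher-capacity local network applied to the selected high-resolution patches.
        self.local_net = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion module combining global logits with aggregated local features.
        self.fusion = nn.Linear(num_classes + 128, num_classes)

    def forward(self, x):
        # x: (B, 1, H, W) full-resolution grayscale image, with H, W >= patch_size.
        saliency = self.global_net(x)                 # (B, num_classes, h, w)
        global_logits = saliency.flatten(2).mean(-1)  # (B, num_classes)

        # Select the most salient coarse locations and crop patches around them.
        heat = saliency.sum(dim=1)                    # (B, h, w)
        B, h, w = heat.shape
        idx = heat.flatten(1).topk(self.num_patches, dim=1).indices
        rel_y = torch.div(idx, w, rounding_mode="floor").float() / h
        rel_x = (idx % w).float() / w

        H, W = x.shape[-2:]
        half = self.patch_size // 2
        local_feats = []
        for b in range(B):
            feats = []
            for k in range(self.num_patches):
                y0 = max(0, min(int(rel_y[b, k] * H) - half, H - self.patch_size))
                x0 = max(0, min(int(rel_x[b, k] * W) - half, W - self.patch_size))
                patch = x[b:b + 1, :, y0:y0 + self.patch_size, x0:x0 + self.patch_size]
                feats.append(self.local_net(patch))   # (1, 128)
            # Mean pooling over patches stands in for a learned aggregation.
            local_feats.append(torch.stack(feats).mean(dim=0))
        local_feat = torch.cat(local_feats, dim=0)    # (B, 128)

        return self.fusion(torch.cat([global_logits, local_feat], dim=1))

Training such a model would rely only on an image-level label, e.g. a cross-entropy loss on the fused output, while the intermediate saliency map provides the pixel-level localization of suspicious regions described in the abstract.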