Improving Classification of Breast Cancer by Utilizing the Image Pyramids of Whole-Slide Imaging and Multi-Scale Convolutional Neural Networks

Cited by: 0
Authors
Tong, Li [1 ,2 ]
Sha, Ying [3 ]
Wang, May D. [1 ,2 ]
Affiliations
[1] Georgia Inst Technol, Dept Biomed Engn, Atlanta, GA 30332 USA
[2] Emory Univ, Atlanta, GA 30332 USA
[3] Georgia Inst Technol, Sch Biol, Atlanta, GA 30332 USA
Source
2019 IEEE 43RD ANNUAL COMPUTER SOFTWARE AND APPLICATIONS CONFERENCE (COMPSAC), VOL 1 | 2019
Funding
U.S. National Science Foundation;
Keywords
Whole-Slide Imaging; Breast Cancer; Image Pyramid; Multi-Scale Convolutional Neural Network; PATHOLOGY;
DOI
10.1109/COMPSAC.2019.00105
Chinese Library Classification Code
TP39 [Computer Applications];
Discipline Classification Codes
081203 ; 0835 ;
Abstract
Whole-slide imaging (WSI) is the digitization of conventional glass slides. Automatic computer-aided diagnosis (CAD) based on WSI enables digital pathology and the integration of pathology with other data such as genomic biomarkers. Numerous computational algorithms have been developed for WSI, most of which take image patches cropped from the highest-resolution level as input. However, these models exploit only the local information within each patch and lose the connections between neighboring patches, which may carry important context information. In this paper, we propose a novel multi-scale convolutional network (ConvNet) that utilizes the built-in image pyramids of WSI. For concentric image patches cropped at the same location from different resolution levels, we hypothesize that the extra input images from lower magnifications will provide context information that enhances the prediction for patch images. We build a corresponding ConvNet for feature representation at each scale and then combine the extracted features by 1) late fusion: concatenating or averaging the feature vectors before performing classification, or 2) early fusion: merging the ConvNet feature maps. We have applied the multi-scale networks to a benchmark breast cancer WSI dataset. Extensive experiments demonstrate that our multi-scale networks utilizing the WSI image pyramids achieve higher accuracy for the classification of breast cancer. The late fusion method that averages the feature vectors reaches the highest accuracy (81.50%), which is promising for the application of multi-scale analysis of WSI.
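The two late-fusion strategies described in the abstract can be sketched as follows; this is a minimal illustration in plain Python, where the function names and toy feature values are assumptions for exposition, not the authors' implementation:

```python
# Hedged sketch of the abstract's late-fusion strategies for concentric
# multi-scale patches. Each per-scale ConvNet is assumed to emit one
# fixed-length feature vector; fusion happens before the classifier.

def late_fusion_average(feature_vectors):
    """Late fusion by averaging: element-wise mean of the per-scale
    feature vectors (the variant reported to reach 81.50% accuracy)."""
    n = len(feature_vectors)
    dim = len(feature_vectors[0])
    return [sum(v[i] for v in feature_vectors) / n for i in range(dim)]

def late_fusion_concat(feature_vectors):
    """Late fusion by concatenation: stack the per-scale feature
    vectors into one longer vector before classification."""
    fused = []
    for v in feature_vectors:
        fused.extend(v)
    return fused

# Toy features from ConvNets at two magnification levels of the pyramid
# (e.g. a high-magnification patch and its lower-magnification context).
high_mag = [0.25, 0.75, 0.5]
low_mag = [0.75, 0.25, 0.0]

avg = late_fusion_average([high_mag, low_mag])  # [0.5, 0.5, 0.25]
cat = late_fusion_concat([high_mag, low_mag])   # 6-dimensional vector
```

Note the design trade-off: averaging keeps the fused feature dimension fixed regardless of how many pyramid levels are used, while concatenation grows the classifier input linearly with the number of scales.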
Pages: 696-703
Number of pages: 8