Super-resolution and segmentation deep learning for breast cancer histopathology image analysis

Cited by: 13
Authors
Juhong, Aniwat [1 ,2 ]
Li, Bo [1 ,2 ]
Yao, Cheng-You [2 ,3 ]
Yang, Chia-Wei [2 ,4 ]
Agnew, Dalen W. [5 ]
Lei, Yu Leo [6 ]
Huang, Xuefei [2 ,3 ,4 ]
Piyawattanametha, Wibool [2 ,7 ]
Qiu, Zhen [1 ,2 ,3 ]
Affiliations
[1] Michigan State Univ, Dept Elect & Comp Engn, E Lansing, MI 48823 USA
[2] Michigan State Univ, Inst Quantitat Hlth Sci & Engn, E Lansing, MI 48823 USA
[3] Michigan State Univ, Dept Biomed Engn, E Lansing, MI 48824 USA
[4] Michigan State Univ, Dept Chem, E Lansing, MI 48824 USA
[5] Michigan State Univ, Coll Vet Med, E Lansing, MI 48824 USA
[6] Univ Michigan, Dept Periodont Oral Med, Ann Arbor, MI 48104 USA
[7] King Mongkuts Inst Technol Ladkrabang KMITL, Sch Engn, Dept Biomed Engn, Bangkok 10520, Thailand
Source
BIOMEDICAL OPTICS EXPRESS | 2023, Vol. 14, Issue 1
Funding
U.S. National Science Foundation
Keywords
NUCLEI; DIAGNOSIS; NETWORK
DOI
10.1364/BOE.463839
Chinese Library Classification
Q5 [Biochemistry]
Discipline Code
071010; 081704
Abstract
Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically very large, making them inconvenient to manage, transfer across a computer network, or store on systems with limited capacity. As a result, image compression is commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for super-resolution enhancement of low-resolution images and for characterization of cells and nuclei in hematoxylin and eosin (H&E) stained breast cancer histopathology images, with the aim of facilitating cancer diagnosis in low-resource settings. Super-resolution is performed by a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt). The network provides a large improvement in image quality, with a peak signal-to-noise ratio above 30 dB and a structural similarity above 0.93, exceeding the results obtained from both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN segments the high-resolution breast cancer images generated by our model, achieving an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 on the H&E image segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-Net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-Net models as pre-trained weights for transfer learning. The jointly trained models' results improve further and are promising. We anticipate that these custom CNNs can help overcome the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by recovering high-resolution images from low-performance microscopes in resource-constrained settings. (c) 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
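
For readers unfamiliar with the generator's building unit, the sketch below illustrates an aggregated-residual-transformation (ResNeXt) block of the kind the SRGAN-ResNeXt generator is named after: a bottleneck residual branch realized as grouped convolutions, added back to an identity shortcut. This is a minimal PyTorch illustration only; the cardinality, channel widths, and activation choices are assumptions made for exposition and are not taken from the paper.

# Minimal PyTorch sketch of an aggregated-residual-transformation (ResNeXt)
# block, the kind of unit the SRGAN-ResNeXt generator is built from.
# Cardinality and channel widths below are illustrative assumptions.
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    def __init__(self, channels=64, cardinality=8, bottleneck=4):
        super().__init__()
        width = cardinality * bottleneck  # total width of the grouped branch
        self.branch = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=1, bias=False),
            nn.BatchNorm2d(width),
            nn.PReLU(),
            # grouped 3x3 convolution = "cardinality" parallel transformations
            nn.Conv2d(width, width, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(width),
            nn.PReLU(),
            nn.Conv2d(width, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # aggregated transformations plus identity shortcut
        return x + self.branch(x)

if __name__ == "__main__":
    block = ResNeXtBlock()
    y = block(torch.randn(1, 64, 48, 48))
    print(y.shape)  # torch.Size([1, 64, 48, 48])

Stacking several such blocks before the upsampling layers is the usual way a ResNeXt-style generator deepens its feature extractor without a proportional increase in parameters.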
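
The abstract reports image quality with PSNR and SSIM and segmentation quality with Intersection over Union and the Dice coefficient. The following minimal sketch shows how these four metrics are conventionally computed, using NumPy and scikit-image (>= 0.19); it is an illustrative reference, not the authors' evaluation code.

# Conventional computation of the four metrics quoted in the abstract.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def psnr_ssim(reference, generated):
    """Image-quality metrics between a ground-truth high-resolution image
    and a super-resolved output (both float RGB arrays in [0, 1])."""
    psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)
    ssim = structural_similarity(reference, generated,
                                 channel_axis=-1, data_range=1.0)
    return psnr, ssim

def iou_dice(gt_mask, pred_mask, eps=1e-7):
    """Overlap metrics between binary segmentation masks."""
    gt = gt_mask.astype(bool)
    pred = pred_mask.astype(bool)
    inter = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    iou = inter / (union + eps)
    dice = 2.0 * inter / (gt.sum() + pred.sum() + eps)
    return iou, dice
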
Pages: 18-36 (19 pages)