A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound

Cited: 57
Authors
Gomez-Flores, Wilfrido [1 ]
de Albuquerque Pereira, Wagner Coelho [2 ]
Affiliations
[1] Inst Politecn Nacl, Unidad Tamaulipas, Ctr Invest & Estudios Avanzados, Ciudad Victoria, Tamaulipas, Mexico
[2] Univ Fed Rio de Janeiro, Programa Engn Biomed COPPE, Rio De Janeiro, Brazil
Keywords
Breast ultrasound; Breast tumors; Convolutional neural networks; Semantic segmentation; Transfer learning; COMPUTER-AIDED DIAGNOSIS; CLASSIFICATION; LESIONS; AGE;
DOI
10.1016/j.compbiomed.2020.104036
CLC Classification
Q [Biological Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
The automatic segmentation of breast tumors in ultrasound (BUS) has recently been addressed using convolutional neural networks (CNNs). These CNN-based approaches generally modify a previously proposed CNN architecture or design a new architecture from CNN ensembles. Although these methods have reported satisfactory results, the trained CNN architectures are often unavailable for reproducibility purposes. Moreover, these methods commonly learn from small BUS datasets with particular properties, which limits generalization to new cases. This paper evaluates four public CNN-based semantic segmentation models developed by the computer vision community: (1) Fully Convolutional Network (FCN) with the AlexNet network, (2) the U-Net network, (3) SegNet using the VGG16 and VGG19 networks, and (4) DeepLabV3+ using the ResNet18, ResNet50, MobileNet-V2, and Xception networks. By transfer learning, these CNNs are fine-tuned to segment BUS images into normal and tumoral pixels. The goal is to select a potential CNN-based segmentation model to be further used in computer-aided diagnosis (CAD) systems. The main significance of this study is the comparison of eight well-established CNN architectures on a more extensive BUS dataset than those used by approaches currently found in the literature. More than 3000 BUS images acquired from seven US machine models are used for training and validation. The F1-score (F1s) and the Intersection over Union (IoU) quantify the segmentation performance. The segmentation models based on SegNet and DeepLabV3+ obtain the best results, with F1s > 0.90 and IoU > 0.81. For U-Net, the segmentation performance is F1s = 0.89 and IoU = 0.80, whereas FCN-AlexNet attains the lowest results, with F1s = 0.84 and IoU = 0.73. In particular, DeepLabV3+ with ResNet18 obtains F1s = 0.905 and IoU = 0.827 and requires the least training time among the SegNet and DeepLabV3+ networks; hence, it is a potential candidate for implementing fully automated end-to-end CAD systems. The CNN models generated in this study are available to researchers at https://github.com/wgomezf/CNN-BUS-segment to enable fair comparison with other CNN-based segmentation approaches for BUS images.
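The abstract quantifies segmentation quality with the pixel-wise F1-score (F1s, equivalent to the Dice coefficient) and the Intersection over Union (IoU, the Jaccard index). The sketch below is a minimal, illustrative NumPy implementation of these two metrics for a single binary mask; the function name segmentation_scores and the toy masks are hypothetical and are not taken from the authors' code released at the repository above.

```python
import numpy as np

def segmentation_scores(pred_mask: np.ndarray, true_mask: np.ndarray):
    """Per-image F1-score (Dice) and IoU (Jaccard) for binary masks,
    where 1 marks tumoral pixels and 0 marks normal tissue."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)

    tp = np.logical_and(pred, true).sum()    # tumoral pixels correctly detected
    fp = np.logical_and(pred, ~true).sum()   # normal pixels flagged as tumor
    fn = np.logical_and(~pred, true).sum()   # tumoral pixels missed

    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    return f1, iou

if __name__ == "__main__":
    # Toy example: the predicted tumor region overlaps the ground truth
    # but is shifted by one column.
    true = np.zeros((4, 4), dtype=int)
    true[1:3, 1:3] = 1
    pred = np.zeros((4, 4), dtype=int)
    pred[1:3, 2:4] = 1
    f1, iou = segmentation_scores(pred, true)
    print(f"F1 = {f1:.3f}, IoU = {iou:.3f}")  # F1 = 0.500, IoU = 0.333
```

For a single binary mask the two scores are related by F1 = 2*IoU / (1 + IoU); for example, the reported DeepLabV3+ ResNet18 IoU of 0.827 corresponds to F1 ≈ 0.905, matching the abstract (dataset-averaged scores follow this identity only approximately).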
Pages: 10
Related Papers
50 records in total
  • [21] Efficient pollen grain classification using pre-trained Convolutional Neural Networks: a comprehensive study
    Rostami, Masoud A.
    Balmaki, Behnaz
    Dyer, Lee A.
    Allen, Julie M.
    Sallam, Mohamed F.
    Frontalini, Fabrizio
    Journal of Big Data, 10
  • [22] CONVOLUTIONAL NEURAL NETWORKS FOR OMNIDIRECTIONAL IMAGE QUALITY ASSESSMENT: PRE-TRAINED OR RE-TRAINED?
    Sendjasni, Abderrezzaq
    Larabi, Mohamed-Chaker
    Cheikh, Faouzi Alaya
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3413 - 3417
  • [23] Recognizing Malaysia Traffic Signs with Pre-Trained Deep Convolutional Neural Networks
    How, Dickson Neoh Tze
    Sahari, Khairul Salleh Mohamed
    Hou, Yew Cheong
    Basubeit, Omar Gumaan Saleh
    2019 4TH INTERNATIONAL CONFERENCE ON CONTROL, ROBOTICS AND CYBERNETICS (CRC 2019), 2019, : 109 - 113
  • [24] Age Estimation Based on Face Images and Pre-trained Convolutional Neural Networks
    Anand, Abhinav
    Labati, Ruggero Donida
    Genovese, Angelo
    Munoz, Enrique
    Piuri, Vincenzo
    Scotti, Fabio
    2017 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2017, : 3357 - 3363
  • [25] Efficient Aspect Object Models Using Pre-trained Convolutional Neural Networks
    Wilkinson, Eric
    Takahashi, Takeshi
    2015 IEEE-RAS 15TH INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2015, : 284 - 289
  • [26] The Impact of Padding on Image Classification by Using Pre-trained Convolutional Neural Networks
    Tang, Hongxiang
    Ortis, Alessandro
    Battiato, Sebastiano
    IMAGE ANALYSIS AND PROCESSING - ICIAP 2019, PT II, 2019, 11752 : 337 - 344
  • [27] Filter pruning by image channel reduction in pre-trained convolutional neural networks
    Chung, Gi Su
    Won, Chee Sun
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (20) : 30817 - 30826
  • [28] Filter pruning by image channel reduction in pre-trained convolutional neural networks
    Gi Su Chung
    Chee Sun Won
    Multimedia Tools and Applications, 2021, 80 : 30817 - 30826
  • [29] Transfer learning with pre-trained deep convolutional neural networks for the automatic assessment of liver steatosis in ultrasound images
    Constantinescu, Elena Codruta
    Udristoiu, Anca-Loredana
    Udristoiu, Stefan Cristinel
    Iacob, Andreea Valentina
    Gruionu, Lucian Gheorghe
    Gruionu, Gabriel
    Sandulescu, Larisa
    Saftoiu, Adrian
    MEDICAL ULTRASONOGRAPHY, 2021, 23 (02) : 135 - 139
  • [30] Predicting Breast Cancer Malignancy On DCE-MRI Data Using Pre-Trained Convolutional Neural Networks
    Antropova, N.
    Huynh, B.
    Giger, M.
    MEDICAL PHYSICS, 2016, 43 (06) : 3349 - 3350