A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound

Cited by: 57
Authors
Gomez-Flores, Wilfrido [1 ]
de Albuquerque Pereira, Wagner Coelho [2 ]
Affiliations
[1] Inst Politecn Nacl, Unidad Tamaulipas, Ctr Invest & Estudios Avanzados, Ciudad Victoria, Tamaulipas, Mexico
[2] Univ Fed Rio de Janeiro, Programa Engn Biomed COPPE, Rio De Janeiro, Brazil
Keywords
Breast ultrasound; Breast tumors; Convolutional neural networks; Semantic segmentation; Transfer learning; COMPUTER-AIDED DIAGNOSIS; CLASSIFICATION; LESIONS; AGE;
DOI
10.1016/j.compbiomed.2020.104036
Chinese Library Classification
Q [Biological Sciences];
Discipline codes
07; 0710; 09;
Abstract
The automatic segmentation of breast tumors in ultrasound (BUS) has recently been addressed using convolutional neural networks (CNN). These CNN-based approaches generally modify a previously proposed CNN architecture or design a new architecture using CNN ensembles. Although these methods have reported satisfactory results, the trained CNN architectures are often unavailable for reproducibility purposes. Moreover, these methods commonly learn from small BUS datasets with particular properties, which limits generalization to new cases. This paper evaluates four public CNN-based semantic segmentation models developed by the computer vision community: (1) Fully Convolutional Network (FCN) with the AlexNet network, (2) the U-Net network, (3) SegNet using the VGG16 and VGG19 networks, and (4) DeepLabV3+ using the ResNet18, ResNet50, MobileNet-V2, and Xception networks. By transfer learning, these CNNs are fine-tuned to segment BUS images into normal and tumoral pixels. The goal is to select a potential CNN-based segmentation model to be further used in computer-aided diagnosis (CAD) systems. The main significance of this study is the comparison of eight well-established CNN architectures using a more extensive BUS dataset than those used by approaches currently found in the literature. More than 3000 BUS images acquired from seven US machine models are used for training and validation. The F1-score (F1s) and the Intersection over Union (IoU) quantify the segmentation performance. The segmentation models based on SegNet and DeepLabV3+ obtain the best results, with F1s > 0.90 and IoU > 0.81. For U-Net, the segmentation performance is F1s = 0.89 and IoU = 0.80, whereas FCN-AlexNet attains the lowest results, with F1s = 0.84 and IoU = 0.73. In particular, ResNet18 obtains F1s = 0.905 and IoU = 0.827 and requires the least training time among the SegNet and DeepLabV3+ networks.
Hence, ResNet18 is a potential candidate for implementing fully automated end-to-end CAD systems. The CNN models generated in this study are available to researchers at https://github.com/wgomezf/CNN-BUS-segment, which aims to enable fair comparison with other CNN-based segmentation approaches for BUS images.
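The two evaluation metrics reported in the abstract have standard definitions for binary masks: F1s (equivalently the Dice coefficient) is 2·TP / (2·TP + FP + FN), and IoU is TP / (TP + FP + FN). The following minimal NumPy sketch (not code from the paper; function name and array-based interface are illustrative assumptions) shows how both are computed from a predicted and a ground-truth tumor mask:

```python
import numpy as np

def f1_and_iou(pred, target):
    """Compute F1-score (Dice) and IoU for binary segmentation masks.

    pred, target: arrays of the same shape, nonzero = tumoral pixel.
    Returns (f1, iou); both are 1.0 when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()   # TP
    union = np.logical_or(pred, target).sum()    # TP + FP + FN
    total = pred.sum() + target.sum()            # 2*TP + FP + FN
    iou = inter / union if union else 1.0
    f1 = 2 * inter / total if total else 1.0
    return f1, iou

# Example: one of two predicted tumor pixels matches the ground truth.
f1, iou = f1_and_iou(np.array([[1, 1], [0, 0]]),
                     np.array([[1, 0], [0, 0]]))
# f1 = 2/3, iou = 1/2
```

Note that F1s and IoU are monotonically related (IoU = F1s / (2 − F1s)), which is consistent with the paper's rankings agreeing under both metrics.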
Pages: 10