A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound

Citations: 57
Authors
Gomez-Flores, Wilfrido [1 ]
de Albuquerque Pereira, Wagner Coelho [2 ]
Institutions
[1] Inst Politecn Nacl, Unidad Tamaulipas, Ctr Invest & Estudios Avanzados, Ciudad Victoria, Tamaulipas, Mexico
[2] Univ Fed Rio de Janeiro, Programa Engn Biomed COPPE, Rio De Janeiro, Brazil
Keywords
Breast ultrasound; Breast tumors; Convolutional neural networks; Semantic segmentation; Transfer learning; COMPUTER-AIDED DIAGNOSIS; CLASSIFICATION; LESIONS; AGE;
DOI
10.1016/j.compbiomed.2020.104036
Chinese Library Classification
Q [Biological Sciences];
Discipline Classification Codes
07 ; 0710 ; 09 ;
Abstract
The automatic segmentation of breast tumors in ultrasound (BUS) images has recently been addressed using convolutional neural networks (CNN). These CNN-based approaches generally modify a previously proposed CNN architecture or design a new architecture from CNN ensembles. Although these methods report satisfactory results, the trained CNN architectures are often unavailable for reproducibility purposes. Moreover, these methods commonly learn from small BUS datasets with particular properties, which limits generalization to new cases. This paper evaluates four public CNN-based semantic segmentation models developed by the computer vision community: (1) Fully Convolutional Network (FCN) with the AlexNet network, (2) the U-Net network, (3) SegNet using the VGG16 and VGG19 networks, and (4) DeepLabV3+ using the ResNet18, ResNet50, MobileNet-V2, and Xception networks. By transfer learning, these CNNs are fine-tuned to segment BUS images into normal and tumoral pixels. The goal is to select a potential CNN-based segmentation model to be further used in computer-aided diagnosis (CAD) systems. The main significance of this study is the comparison of eight well-established CNN architectures using a more extensive BUS dataset than those used by approaches currently found in the literature. More than 3000 BUS images acquired from seven US machine models are used for training and validation. The F1-score (F1s) and the Intersection over Union (IoU) quantify segmentation performance. The segmentation models based on SegNet and DeepLabV3+ obtain the best results, with F1s > 0.90 and IoU > 0.81. For U-Net, the segmentation performance is F1s = 0.89 and IoU = 0.80, whereas FCN-AlexNet attains the lowest results, with F1s = 0.84 and IoU = 0.73. In particular, ResNet18 obtains F1s = 0.905 and IoU = 0.827 and requires the least training time among the SegNet and DeepLabV3+ networks.
Hence, ResNet18 is a potential candidate for implementing fully automated end-to-end CAD systems. The CNN models generated in this study are available to researchers at https://github.com/wgomezf/CNN-BUS-segment, which aims to enable fair comparison with other CNN-based segmentation approaches for BUS images.
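The abstract does not spell out how F1s and IoU are computed; a minimal sketch of the standard pixel-wise definitions of these two metrics for binary tumor masks could look like the following (the function name and the 1 = tumor mask convention are illustrative, not taken from the paper):

```python
import numpy as np

def f1_and_iou(pred, truth):
    """Pixel-wise F1-score and Intersection over Union for
    binary segmentation masks (1 = tumor, 0 = normal)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()    # tumor pixels correctly segmented
    fp = np.logical_and(pred, ~truth).sum()   # normal pixels marked as tumor
    fn = np.logical_and(~pred, truth).sum()   # tumor pixels missed
    denom_f1 = 2 * tp + fp + fn
    denom_iou = tp + fp + fn
    f1 = 2 * tp / denom_f1 if denom_f1 else 1.0
    iou = tp / denom_iou if denom_iou else 1.0
    return float(f1), float(iou)
```

For example, a prediction that covers the single true tumor pixel plus one extra pixel yields F1 = 2/3 and IoU = 1/2, illustrating that IoU penalizes the same error more heavily than F1.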
Pages: 10
Related Papers
50 records
  • [41] Hierarchical Convolutional Neural Networks for Segmentation of Breast Tumors in MRI With Application to Radiogenomics
    Zhang, Jun
    Saha, Ashirbani
    Zhu, Zhe
    Mazurowski, Maciej A.
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2019, 38 (02) : 435 - 447
  • [42] ConvTimeNet: A Pre-trained Deep Convolutional Neural Network for Time Series Classification
    Kashiparekh, Kathan
    Narwariya, Jyoti
    Malhotra, Pankaj
    Vig, Lovekesh
    Shroff, Gautam
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [43] Hyperparameter optimization of pre-trained convolutional neural networks using adolescent identity search algorithm
    Ebubekir Akkuş
    Ufuk Bal
    Fatma Önay Koçoğlu
    Selami Beyhan
    Neural Computing and Applications, 2024, 36 : 1523 - 1537
  • [44] Hyperparameter optimization of pre-trained convolutional neural networks using adolescent identity search algorithm
    Akkus, Ebubekir
    Bal, Ufuk
    Koçoğlu, Fatma Önay
    Beyhan, Selami
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 (04) : 1523 - 1537
  • [45] Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories
    Hacene, Ghouthi Boukli
    Gripon, Vincent
    Farrugia, Nicolas
    Arzel, Matthieu
    Jezequel, Michel
    JOURNAL OF SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, 2019, 91 (09): : 1063 - 1073
  • [46] REAL-TIME INFORMATIVE LARYNGOSCOPIC FRAME CLASSIFICATION WITH PRE-TRAINED CONVOLUTIONAL NEURAL NETWORKS
    Galdran, Adrian
    Costa, P.
    Campilho, A.
    2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019), 2019, : 87 - 90
  • [47] Automated identification of Chagas disease vectors using AlexNet pre-trained convolutional neural networks
    Miranda, Vinicius L.
    Oliveira-Correia, Joao P. S.
    Galvao, Cleber
    Obara, Marcos T.
    Peterson, A. Townsend
    Gurgel-Goncalves, Rodrigo
    MEDICAL AND VETERINARY ENTOMOLOGY, 2025, 39 (02) : 291 - 300
  • [48] Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories
    Ghouthi Boukli Hacene
    Vincent Gripon
    Nicolas Farrugia
    Matthieu Arzel
    Michel Jezequel
    Journal of Signal Processing Systems, 2019, 91 : 1063 - 1073
  • [49] Ultrasound breast tumoral classification by a new adaptive pre-trained convolutive neural networks for computer-aided diagnosis
    Fatma Zohra Reguieg
    Nadjia Benblidia
    Multimedia Tools and Applications, 2024, 83 : 46249 - 46282
  • [50] COMPARATIVE ANALYSIS OF SELF-SUPERVISED PRE-TRAINED VISION TRANSFORMERS AND CONVOLUTIONAL NEURAL NETWORKS WITH CHEXNET IN CLASSIFYING LUNG CONDITIONS
    Elwirehardja, Gregorius Natanael
    Liem, Steve Marcello
    Adjie, Maria Linneke
    Tjan, Farrel Alexander
    Setiawan, Joselyn
    Syahputra, Muhammad Edo
    Muljo, Hery Harjono
    COMMUNICATIONS IN MATHEMATICAL BIOLOGY AND NEUROSCIENCE, 2025,