An Improved Multi-Scale Feature Fusion for Skin Lesion Segmentation

Cited by: 8
Authors
Liu, Luzhou [1 ]
Zhang, Xiaoxia [1 ]
Li, Yingwei [1 ]
Xu, Zhinan [1 ]
Affiliations
[1] Univ Sci & Technol Liaoning, Sch Comp Sci & Software Engn, Anshan 114051, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 14
Keywords
skin lesions; image segmentation; deep learning; atrous convolution; UNet3+; U-NET ARCHITECTURE; BORDER DETECTION; IMAGES; DIAGNOSIS; DENSEASPP; NETWORK; CANCER;
DOI
10.3390/app13148512
CLC number
O6 [Chemistry];
Subject classification code
0703;
Abstract
Accurate segmentation of skin lesions remains a challenging task for automatic diagnostic systems because of the significant shape variations and blurred boundaries of the lesions. This paper proposes a multi-scale convolutional neural network, REDAUNet, built on UNet3+ to enhance network performance for practical skin segmentation applications. First, the network employs a new encoder module composed of four feature extraction layers built from two cross-residual (CR) units. This configuration allows the module to extract deep semantic information while mitigating vanishing gradients. Subsequently, a lightweight efficient channel attention (ECA) module is introduced during the encoder's feature extraction stage; it assigns suitable weights to channels through attention learning and effectively captures inter-channel interactions. Finally, a densely connected atrous spatial pyramid pooling (DenseASPP) module is inserted between the encoder and decoder paths. This module combines dense connections with ASPP to fuse multi-scale information and recognize lesions of varying sizes. The experiments in this paper were conducted on two public skin lesion datasets, ISIC-2018 and ISIC-2017. The results show that the model segments lesions of different shapes more accurately and achieves state-of-the-art segmentation performance. Compared with UNet3+, the proposed REDAUNet improves the Dice, Spec, and mIoU metrics by 2.01%, 4.33%, and 2.68%, respectively. These results suggest that REDAUNet is well suited for skin lesion segmentation and can be effectively employed in computer-aided systems.
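As a rough illustration of the two components highlighted in the abstract, the following minimal PyTorch sketch reconstructs an efficient channel attention (ECA) block and a densely connected ASPP (DenseASPP) head from their published descriptions (Wang et al., 2020; Yang et al., 2018). It is not the authors' REDAUNet code; the class names, channel widths, and dilation rates shown are assumptions made for the example.

# Illustrative reconstructions of ECA and DenseASPP, not the REDAUNet implementation.
import math
import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient channel attention: global average pooling, a 1-D convolution
    across the channel dimension, and a sigmoid gate that re-weights channels."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size chosen adaptively from the channel count (ECA paper heuristic).
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                                # (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))    # 1-D conv over channels: (N, 1, C)
        y = self.gate(y.transpose(1, 2).unsqueeze(-1))  # back to (N, C, 1, 1)
        return x * y                                    # channel re-weighting


class DenseASPP(nn.Module):
    """Densely connected ASPP: each dilated 3x3 branch receives the input feature
    map concatenated with the outputs of all previous branches."""

    def __init__(self, in_ch: int, mid_ch: int = 64, growth: int = 32,
                 rates=(3, 6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList()
        ch = in_ch
        for r in rates:
            self.branches.append(nn.Sequential(
                nn.Conv2d(ch, mid_ch, 1, bias=False),
                nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
                nn.Conv2d(mid_ch, growth, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(growth), nn.ReLU(inplace=True)))
            ch += growth
        self.project = nn.Conv2d(ch, in_ch, 1)  # fuse back to the input width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return self.project(torch.cat(feats, dim=1))


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)   # dummy encoder feature map
    feat = ECA(64)(feat)                # channel attention on encoder features
    out = DenseASPP(64)(feat)           # multi-scale fusion between encoder and decoder
    print(out.shape)                    # torch.Size([2, 64, 32, 32])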
Pages: 21