Improving Skin Lesion Segmentation with Self-Training

Cited: 1
Authors
Dzieniszewska, Aleksandra [1]
Garbat, Piotr [1]
Piramidowicz, Ryszard [1]
Affiliations
[1] Warsaw Univ Technol, Inst Microelect & Optoelect, PL-00662 Warsaw, Poland
Keywords
deep learning; semi-supervised learning; skin lesion segmentation; skin cancer; dermoscopy images
DOI
10.3390/cancers16061120
Chinese Library Classification
R73 [Oncology]
Subject Classification Code
100214
Abstract
Simple Summary: Finding the area of a skin lesion in dermoscopy images is important for diagnosing skin conditions, and the accuracy of segmentation affects the overall diagnosis. Segmentation quality depends on the amount of labeled data, which is hard to obtain because it requires a great deal of expert time. This study introduces a technique that enhances segmentation by combining expert-generated and computer-generated labels: a trained model generates labels for new data, which are then used to improve the model. The findings suggest that this approach could make skin cancer detection tools more accurate and efficient, potentially making a big difference in the medical field, especially where high-quality data are limited.
Abstract: Skin lesion segmentation plays a key role in the diagnosis of skin cancer; it can be a component of both traditional algorithms and end-to-end approaches. The quality of segmentation directly impacts the accuracy of classification; however, attaining optimal segmentation requires a substantial amount of labeled data. Semi-supervised learning allows unlabeled data to be employed to enhance the results of a machine learning model. In medical image segmentation, acquiring detailed annotations is time-consuming and costly and requires skilled individuals, so utilizing unlabeled data significantly mitigates manual segmentation effort. This study proposes a novel approach to semi-supervised skin lesion segmentation using self-training with a Noisy Student, which makes it possible to exploit large amounts of available unlabeled images. It consists of four steps: first, training the teacher model on labeled data only; second, generating pseudo-labels with the teacher model; third, training the student model on both labeled and pseudo-labeled data; and lastly, training the student* model on pseudo-labels generated with the student model.
In this work, we implemented the DeepLabV3 architecture as both the teacher and student models. As a final result, we achieved an mIoU of 88.0% on the ISIC 2018 dataset and an mIoU of 87.54% on the PH2 dataset. The evaluation of the proposed approach shows that Noisy Student training improves the segmentation performance of neural networks on a skin lesion segmentation task while using only small amounts of labeled data.
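The four-step self-training procedure described in the abstract can be sketched as a generic loop. The names below (`self_train`, `train_fn`, `predict_fn`) and the toy 1-D threshold "model" are illustrative assumptions chosen to keep the sketch self-contained and runnable; they are not the paper's DeepLabV3 pipeline.

```python
# Sketch of the four-step Noisy Student self-training loop from the abstract.
# The toy threshold model stands in for DeepLabV3 purely for illustration.

def self_train(train_fn, predict_fn, labeled, unlabeled_inputs, rounds=2):
    # Step 1: train the teacher on labeled data only.
    model = train_fn(labeled)
    for _ in range(rounds):  # round 1 yields the student, round 2 the student*
        # Steps 2 and 4: generate pseudo-labels with the current model.
        pseudo = [(x, predict_fn(model, x)) for x in unlabeled_inputs]
        # Step 3: train the next model on labeled + pseudo-labeled data.
        model = train_fn(labeled + pseudo)
    return model

# Toy stand-in "model": a 1-D classifier whose learned parameter is the
# midpoint between the two class means.
def train_threshold(data):
    neg = [x for x, y in data if y == 0]
    pos = [x for x, y in data if y == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def predict_threshold(threshold, x):
    return 1 if x > threshold else 0

labeled = [(0.0, 0), (1.0, 1)]          # expert-labeled examples
unlabeled = [0.1, 0.2, 0.8, 0.9]        # unlabeled inputs to pseudo-label
final = self_train(train_threshold, predict_threshold, labeled, unlabeled)
```

In the real system each `train_fn` call would fit a DeepLabV3 network and `predict_fn` would produce a segmentation mask per image; the loop structure, however, is the same: teacher, pseudo-labels, student, then student* on the student's pseudo-labels.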
Pages: 22