A Convolutional Neural Network Framework for Accurate Skin Cancer Detection

Cited by: 80
Authors
Thurnhofer-Hemsi, Karl [1 ,2 ]
Dominguez, Enrique [1 ,2 ]
Affiliations
[1] Univ Malaga, Dept Comp Languages & Comp Sci, Boulevar Louis Pasteur 35, Malaga 29071, Spain
[2] Biomed Res Inst Malaga IBIMA, C Doctor Miguel Diaz Recio 28, Malaga 29010, Spain
Keywords
Image processing; Deep learning; Classification; Skin cancer; Melanoma; IMAGES; SYSTEM;
DOI
10.1007/s11063-020-10364-y
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Skin diseases have become a challenge in medical diagnosis due to their visual similarities. Although melanoma is the best-known type of skin cancer, other pathologies have caused many deaths in recent years. The lack of large datasets is one of the main difficulties in developing a reliable automatic classification system. This paper presents a deep learning framework for skin cancer detection. Transfer learning was applied to five state-of-the-art convolutional neural networks to create both a plain classifier and a hierarchical (two-level) classifier capable of distinguishing between seven types of moles. The HAM10000 dataset, a large collection of dermatoscopic images, was used for the experiments, with data augmentation techniques applied to improve performance. Results demonstrate that the DenseNet201 network is well suited to this task, achieving high classification accuracies and F-measures with fewer false negatives. The plain model performed better than the two-level model, although the first level, i.e. a binary classification between nevi and non-nevi, yielded the best outcomes.
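The two-level scheme described in the abstract (a binary nevus/non-nevus classifier at level 1, routing to a finer classifier at level 2) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 0.5 routing threshold and the exact combination rule are assumptions, and the class abbreviations are the standard HAM10000 labels (nv, mel, bkl, bcc, akiec, vasc, df).

```python
# Sketch of a two-level hierarchical prediction rule (assumed details:
# the paper states only that level 1 separates nevi from non-nevi).

HAM10000_CLASSES = ["nv", "mel", "bkl", "bcc", "akiec", "vasc", "df"]

def hierarchical_predict(p_nevus, p_non_nevus):
    """Combine a level-1 binary score with level-2 six-class scores.

    p_nevus:     probability from the level-1 binary classifier that
                 the lesion is a nevus ("nv").
    p_non_nevus: dict of probabilities from the level-2 classifier over
                 the six non-nevus classes, conditioned on non-nevus.
    """
    if p_nevus >= 0.5:          # assumed threshold
        return "nv"
    # Route to level 2: pick the most probable non-nevus class.
    return max(p_non_nevus, key=p_non_nevus.get)

# Example: level 1 says non-nevus, level 2 favours melanoma.
pred = hierarchical_predict(
    0.2,
    {"mel": 0.6, "bkl": 0.2, "bcc": 0.1,
     "akiec": 0.05, "vasc": 0.03, "df": 0.02},
)
print(pred)  # mel
```

In contrast, the plain model the paper compares against would score all seven classes in a single softmax layer, with no routing step.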
Pages: 3073-3093
Page count: 21
References
37 entries in total
[1]  
American Cancer Society, 2016, CANC FACTS FIGURES
[2]  
Bakheet S, 2017, COMPUTATION, V5, DOI 10.3390/computation5010004
[3]   The Impact of Replacing Complex Hand-Crafted Features with Standard Features for Melanoma Classification Using Both Hand-Crafted and Deep Features [J].
Devassy, Binu Melit ;
Yildirim-Yayilgan, Sule ;
Hardeberg, Jon Yngve .
INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 1, 2019, 868 :150-159
[4]   Robust feature spaces from pre-trained deep network layers for skin lesion classification [J].
dos Santos, Fernando Pereira ;
Ponti, Moacir A. .
PROCEEDINGS 2018 31ST SIBGRAPI CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 2018, :189-196
[5]   Learning physical properties in complex visual scenes: An intelligent machine for perceiving blood flow dynamics from static CT angiography imaging [J].
Gao, Zhifan ;
Wang, Xin ;
Sun, Shanhui ;
Wu, Dan ;
Bai, Junjie ;
Yin, Youbing ;
Liu, Xin ;
Zhang, Heye ;
de Albuquerque, Victor Hugo C. .
NEURAL NETWORKS, 2020, 123 :82-93
[6]   Privileged Modality Distillation for Vessel Border Detection in Intracoronary Imaging [J].
Gao, Zhifan ;
Chung, Jonathan ;
Abdelrazek, Mohamed ;
Leung, Stephanie ;
Hau, William Kongto ;
Xian, Zhanchao ;
Zhang, Heye ;
Li, Shuo .
IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (05) :1524-1534
[7]   Learning the implicit strain reconstruction in ultrasound elastography using privileged information [J].
Gao, Zhifan ;
Wu, Sitong ;
Liu, Zhi ;
Luo, Jianwen ;
Zhang, Heye ;
Gong, Mingming ;
Li, Shuo .
MEDICAL IMAGE ANALYSIS, 2019, 58
[8]   Densely Connected Convolutional Networks [J].
Huang, Gao ;
Liu, Zhuang ;
van der Maaten, Laurens ;
Weinberger, Kilian Q. .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :2261-2269
[9]  
Hussain Zeshan, 2017, AMIA Annu Symp Proc, V2017, P979
[10]  
Jafari MH, 2016, INT C PATT RECOG, P337, DOI 10.1109/ICPR.2016.7899656