DFE-Net: Dual-branch feature extraction network for Enhanced segmentation in skin lesion

Cited by: 12
Authors
Fan, Chao [1 ,2 ]
Yang, Litao [3 ]
Lin, Hao [3 ]
Qiu, Yingying [3 ]
Affiliations
[1] Henan Univ Technol, Sch Artificial Intelligence & Big Data, Zhengzhou, Henan, Peoples R China
[2] Minist Educ, Key Lab Grain Informat Proc & Control, Zhengzhou, Henan, Peoples R China
[3] Henan Univ Technol, Sch Informat Sci & Engn, Zhengzhou 450001, Henan, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Skin Lesion Segmentation; Transformer; Efficient Channel Attention; Global Feature; Local Feature; ATTENTION
DOI
10.1016/j.bspc.2022.104423
Chinese Library Classification
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
Skin lesion segmentation is a critical method for extracting pathological information from dermoscopy images and is of great significance for lesion localization, recognition, monitoring, and treatment. Because lesions vary in size, shape, and color and often have blurred boundaries, existing detection methods cannot accurately capture local features, which degrades segmentation accuracy. Therefore, this paper proposes the Dual-branch Feature Extraction Network (DFE-Net). We design two types of encoders based on the Transformer and the Efficient Channel Attention (ECA) module, one for extracting global features and the other for local features, before fusing and decoding them. At the same time, skip connections from the Enhanced ECA Feature Extraction Modules to the decoders are designed to reduce feature loss during decoding and to restore boundaries and local features as fully as possible. We evaluate the model on three publicly available skin lesion datasets: ISIC-2018, ISIC-2016, and PH2. The results show that our model outperforms previous methods and that its segmentations more accurately represent the actual lesions.
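For illustration only, the following is a minimal PyTorch-style sketch of the dual-branch idea described in the abstract: a convolutional branch re-weighted by an Efficient Channel Attention (ECA) block for local features, and a shallow ViT-style Transformer branch for global context, fused by a 1x1 convolution. The module names, layer sizes, patch size, fixed ECA kernel size, and fusion-by-concatenation choice are assumptions made for the sketch; they are not the exact DFE-Net design, which also includes Enhanced ECA skip connections and a decoder not shown here.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ECA(nn.Module):
    # Efficient Channel Attention (Wang et al., CVPR 2020): channel weights come from a
    # 1-D convolution over globally average-pooled channel descriptors.
    # The kernel size is fixed here for simplicity (the original ECA derives it from C).
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                                      # x: (B, C, H, W)
        y = F.adaptive_avg_pool2d(x, 1)                        # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))         # (B, 1, C)
        y = torch.sigmoid(y).transpose(-1, -2).unsqueeze(-1)   # (B, C, 1, 1)
        return x * y                                           # re-weight channels


class DualBranchSketch(nn.Module):
    # Hypothetical dual-branch encoder: a CNN+ECA branch for local features plus a
    # shallow Transformer branch for global context, fused with a 1x1 convolution.
    def __init__(self, in_ch=3, ch=64, patch=16):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            ECA(),
        )
        self.patch_embed = nn.Conv2d(in_ch, ch, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=ch, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, x):                          # x: (B, 3, H, W), H and W divisible by patch
        local = self.local(x)                      # (B, ch, H, W) local features
        tok = self.patch_embed(x)                  # (B, ch, H/patch, W/patch) patch tokens
        b, c, h, w = tok.shape
        tok = self.transformer(tok.flatten(2).transpose(1, 2))   # (B, h*w, ch) global context
        glob = tok.transpose(1, 2).reshape(b, c, h, w)            # back to a feature map
        glob = F.interpolate(glob, size=local.shape[-2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([local, glob], dim=1))         # fused global + local features


# Example: a 3x256x256 dermoscopy image yields a 64-channel fused feature map.
feats = DualBranchSketch()(torch.randn(1, 3, 256, 256))           # torch.Size([1, 64, 256, 256])

In the full DFE-Net, this fused representation would then pass through a decoder that also receives skip connections from the Enhanced ECA Feature Extraction Modules; the sketch covers only how global and local features could be extracted and combined.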
Pages: 12
Related Papers
22 records in total
[1] Chen J., 2021, arXiv.
[2] Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834-848.
[3] Dosovitskiy A., 2021, arXiv:2010.11929.
[4] Fu, Jun; Liu, Jing; Tian, Haijie; Li, Yong; Bao, Yongjun; Fang, Zhiwei; Lu, Hanqing. Dual Attention Network for Scene Segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 3141-3149.
[5] Guo, Libao; Lei, Baiying; Chen, Weiling; Du, Jie; Frangi, Alejandro F.; Qin, Jing; Zhao, Cheng; Shi, Pengpeng; Xia, Bei; Wang, Tianfu. Dual attention enhancement feature fusion network for segmentation and quantitative analysis of paediatric echocardiography. Medical Image Analysis, 2021, 71.
[6] Ho JAT, 2019, arXiv:1912.12180, DOI 10.48550/ARXIV.1912.12180.
[7] Hu, Jie; Shen, Li; Albanie, Samuel; Sun, Gang; Wu, Enhua. Squeeze-and-Excitation Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011-2023.
[8] Li, Kunpeng; Wu, Ziyan; Peng, Kuan-Chuan; Ernst, Jan; Fu, Yun. Tell Me Where to Look: Guided Attention Inference Network. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 9215-9223.
[9] Long J., 2015, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 3431, DOI 10.1109/CVPR.2015.7298965.
[10] Lu, Jiasen; Xiong, Caiming; Parikh, Devi; Socher, Richard. Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 3242-3250.