CSUnet: a dual attention and hybrid convolutional network for polyp segmentation

Cited: 0
Authors
Liu, Shangwang [1 ,2 ]
Si, Feiyan [1 ]
Lin, Yinghai [1 ]
Affiliations
[1] Henan Normal Univ, Sch Comp & Informat Engn, 46 Jianshe East Rd, Xinxiang 453000, Henan, Peoples R China
[2] Henan Normal Univ, Engn Lab Intelligence Business Internet Things, 46 Jianshe East Rd, Xinxiang 453000, Henan, Peoples R China
Keywords
Deep learning; Convolutional network; Attention mechanism; Polyp segmentation;
DOI
10.1007/s11760-024-03485-7
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Medical polyp segmentation is a key task in computer vision and medical image analysis; it aims to automatically detect and segment polyps in medical images for early cancer screening and treatment planning. Neural networks, in particular convolutional neural networks (CNNs), play a central role in this task. We present CSUNet, a U-shaped encoder-decoder network specifically designed for polyp segmentation. First, a hybrid convolution mechanism is used to effectively capture features at different scales and improve segmentation performance. In addition, an attention module is added to the skip connections to facilitate efficient feature-map extraction and fusion. We trained and validated the network on two public datasets, Kvasir-SEG and CVC-ClinicDB, where CSUNet reached Mean Intersection over Union values of 87.78% and 92.07%, respectively. The proposed method shows strong segmentation performance and excellent visual results, and experiments on these large-scale public datasets show that CSUNet outperforms other state-of-the-art methods.
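The architecture the abstract describes — a U-shaped encoder-decoder with hybrid multi-scale convolutions and an attention module on the skip connection — can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the block widths, kernel sizes, and the attention design (a squeeze-excite channel gate followed by a spatial gate, one plausible reading of "dual attention") are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class HybridConv(nn.Module):
    """Hypothetical hybrid block: parallel 3x3 and 5x5 convolutions
    concatenated to capture features at two scales."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv3 = nn.Conv2d(in_ch, out_ch // 2, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(in_ch, out_ch // 2, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.conv3(x), self.conv5(x)], dim=1))

class DualAttention(nn.Module):
    """Channel attention (squeeze-excite) followed by spatial attention,
    applied to the skip-connection features before fusion."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(
            nn.Conv2d(ch, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)   # re-weight channels
        return x * self.spatial(x)  # re-weight spatial locations

class MiniCSUNet(nn.Module):
    """One-level U-shaped encoder-decoder with attention on the skip path."""
    def __init__(self, in_ch=3, base=16, n_classes=1):
        super().__init__()
        self.enc = HybridConv(in_ch, base)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = HybridConv(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.skip_att = DualAttention(base)
        self.dec = HybridConv(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)                       # encoder features
        b = self.bottleneck(self.down(e))     # bottleneck at half resolution
        d = torch.cat([self.up(b), self.skip_att(e)], dim=1)  # attended skip fusion
        return self.head(self.dec(d))         # per-pixel segmentation logits
```

A 64x64 RGB input yields a single-channel 64x64 logit map; the real network would stack several such encoder/decoder levels.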
Pages: 8445-8456
Page count: 12