MSRAformer: Multiscale spatial reverse attention network for polyp segmentation

Cited: 25
Authors
Wu, Cong [1]
Long, Cheng [1]
Li, Shijun [1]
Yang, Junjie [2]
Jiang, Fagang [2]
Zhou, Ran [1]
Affiliations
[1] Hubei Univ Technol, Sch Comp Sci, Wuhan, Peoples R China
[2] Huazhong Univ Sci & Technol, Union Hosp, Tongji Med Coll, Wuhan, Peoples R China
Keywords
Polyp segmentation; Machine learning; Multiscale; Attention mechanism
DOI
10.1016/j.compbiomed.2022.106274
Chinese Library Classification (CLC)
Q [Biological Sciences]
Discipline Classification Code
07; 0710; 09
Abstract
Colon polyps are an important reference in the diagnosis of colorectal cancer (CRC). In routine diagnosis, the polyp region is segmented from the colonoscopy image, and the resulting pathological information is used to assist diagnosis and surgery. Accurate segmentation of polyps in colonoscopy images remains challenging: polyps of the same type differ greatly in shape, size, color, and texture, and the polyp region is difficult to distinguish from the mucosal boundary. In recent years, convolutional neural networks (CNNs) have achieved some success in medical image segmentation, but they focus on extracting local features and lack the ability to capture global feature information. This paper presents a Multiscale Spatial Reverse Attention network, MSRAformer, for high-performance medical image segmentation. It adopts a Swin Transformer encoder with a pyramid structure to extract features at four different stages and captures multiscale feature information through a multiscale channel attention module, which strengthens the network's global feature extraction ability and generalization and aggregates a preliminary pre-segmentation result. A spatial reverse attention module is then proposed to gradually supplement the edge structure and detail information of the polyp region. Extensive experiments show that MSRAformer segments colonoscopy polyp datasets more accurately than most state-of-the-art (SOTA) medical image segmentation methods and generalizes better. A reference implementation of MSRAformer is available at https://github.com/ChengLong1222/MSRAformer-main.
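The linked repository holds the authors' reference implementation. As a rough illustration only, the sketch below shows how a reverse-attention refinement step of the kind described in the abstract can be written; it is a minimal, hypothetical PyTorch example, not the authors' code, and the module, parameter, and tensor names (ReverseAttentionBlock, mid_channels, coarse_pred) are assumptions made for this sketch.

# Minimal, hypothetical sketch (assumed PyTorch) of a reverse-attention refinement step;
# not the MSRAformer implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionBlock(nn.Module):
    """Refines a coarse segmentation map using reversed attention weights (illustrative only)."""

    def __init__(self, in_channels: int, mid_channels: int = 64):
        super().__init__()
        # Small convolutional head that predicts a residual correction map.
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, feat: torch.Tensor, coarse_pred: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse prediction (logits) to the resolution of this stage's features.
        pred = F.interpolate(coarse_pred, size=feat.shape[-2:], mode="bilinear", align_corners=False)
        # Reverse attention: weight features by 1 - sigmoid(pred), emphasizing regions the
        # coarse map treats as background, i.e. missed edges and fine details.
        reverse = 1.0 - torch.sigmoid(pred)
        attended = feat * reverse
        # Residual refinement: add a learned correction to the coarse prediction.
        return pred + self.conv(attended)

if __name__ == "__main__":
    block = ReverseAttentionBlock(in_channels=256)
    feat = torch.randn(1, 256, 44, 44)    # hypothetical stage feature map
    coarse = torch.randn(1, 1, 11, 11)    # hypothetical coarse segmentation logits
    print(block(feat, coarse).shape)      # torch.Size([1, 1, 44, 44])

In this sketch, 1 - sigmoid(pred) pushes the refinement toward pixels the coarse map considers background, which is the general idea behind supplementing edge and detail information; the actual MSRAformer module may differ in its details.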
Pages: 8