Combining Swin Transformer and Attention-Weighted Fusion for Scene Text Detection

Cited by: 0
Authors
Xianguo Li
Xingchen Yao
Yi Liu
Affiliations
[1] Tiangong University, School of Electronics and Information Engineering
[2] Tianjin Key Laboratory of Optoelectronic Detection Technology and System, Center for Engineering Internship and Training
[3] Tiangong University
Source
Neural Processing Letters, Vol. 56
Keywords
Scene text detection; Swin transformer; Attention-weighted fusion; Global feature perception;
DOI
Not available
Abstract
Existing text detection algorithms based on Convolutional Neural Networks (CNNs) commonly suffer from insufficient receptive fields and inadequate extraction of spatial positional information, which limit their ability to detect text instances with large scale variation and long or widely spaced text instances, and to distinguish complex background textures. To address these problems, this paper proposes a scene text detection algorithm combining the Swin Transformer with attention-weighted fusion. First, an attention-weighted fusion (AWF) module is proposed, which embeds a modified coordinate attention module (CAM) in the feature pyramid network (FPN). This module learns spatial positional weights for foreground information in features at different scales while suppressing redundant background information, so that the fused features focus more on text regions, improving the localization of text regions and boundaries. Second, the window-based self-attention mechanism of the Swin Transformer is applied to the fused pyramid features to achieve global feature perception. This compensates for the limited receptive fields of CNNs and strengthens the representation of global contextual features, further improving detection performance. Experimental results demonstrate that the proposed algorithm achieves competitive performance on three public datasets, ICDAR2015, MSRA-TD500, and Total-Text, with F-measures of 87.9%, 91.4%, and 86.7%, respectively. Code is available at: https://github.com/xgli411/ST-AWFNet.
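The idea behind attention-weighted fusion can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the authors' implementation: coordinate attention here is reduced to direction-wise average pooling followed by a sigmoid gate (the learned 1x1 convolutions, normalization, and per-scale weighting of the actual CAM/AWF are omitted), and all function names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(feat):
    """Simplified coordinate attention on a (C, H, W) feature map.

    Pools along width and height separately so the weight map retains
    positional information in each direction, then gates the input.
    (Real coordinate attention inserts learned convolutions here.)
    """
    pool_h = feat.mean(axis=2, keepdims=True)   # (C, H, 1): width pooled away
    pool_w = feat.mean(axis=1, keepdims=True)   # (C, 1, W): height pooled away
    weights = sigmoid(pool_h) * sigmoid(pool_w) # broadcasts to (C, H, W)
    return feat * weights

def attention_weighted_fusion(features):
    """Fuse same-sized pyramid levels after per-level attention weighting."""
    return sum(coordinate_attention(f) for f in features)

# Two hypothetical FPN levels already resized to a common resolution.
rng = np.random.default_rng(0)
p3 = rng.random((8, 16, 16)).astype(np.float32)
p4 = rng.random((8, 16, 16)).astype(np.float32)
fused = attention_weighted_fusion([p3, p4])
print(fused.shape)  # (8, 16, 16)
```

The gate emphasizes rows and columns whose pooled activations are strong, which is the mechanism the paper relies on to keep fused features focused on text regions rather than background texture.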