Combining Swin Transformer and Attention-Weighted Fusion for Scene Text Detection

Citations: 0
Authors
Xianguo Li
Xingchen Yao
Yi Liu
Affiliations
[1] Tiangong University, School of Electronics and Information Engineering
[2] Tianjin Key Laboratory of Optoelectronic Detection Technology and System, Center for Engineering Internship and Training
[3] Tiangong University
Source
Neural Processing Letters | Volume 56
Keywords
Scene text detection; Swin transformer; Attention-weighted fusion; Global feature perception
DOI
Not available
Abstract
Existing text detection algorithms based on Convolutional Neural Networks (CNNs) commonly suffer from insufficient receptive fields and inadequate extraction of spatial positional information, which limits their ability to detect text instances with large scale variation or long, widely spaced text instances, and to distinguish text from complex background textures. To address these problems, this paper proposes a scene text detection algorithm that combines the Swin Transformer with attention-weighted fusion. First, an attention-weighted fusion (AWF) module is proposed, which embeds a modified coordinate attention module (CAM) in the feature pyramid network (FPN). This module learns spatial positional weights for foreground information in features of different scales while suppressing redundant background information, so the fused features focus more strongly on text regions, improving the localization of text regions and boundaries. Second, the window-based self-attention mechanism of the Swin Transformer is applied to the fused pyramid features to achieve global feature perception. This compensates for the insufficient receptive fields of CNNs and strengthens the representation of global contextual features, further improving detection performance. Experimental results demonstrate that the proposed algorithm achieves competitive performance on three public datasets, ICDAR2015, MSRA-TD500, and Total-Text, with F-measures of 87.9%, 91.4%, and 86.7%, respectively. Code is available at: https://github.com/xgli411/ST-AWFNet.
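The AWF idea described above can be sketched in a few lines: pool direction-aware descriptors along each spatial axis, turn them into positional weights, and use them to re-weight features before fusing pyramid levels. This is a minimal NumPy illustration, not the paper's implementation; the function names (`coordinate_attention`, `awf_fuse`) are hypothetical, and the learned convolutions of the actual coordinate attention module are omitted, leaving only the pooling and gating structure.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x):
    """Simplified coordinate attention over a feature map x of shape (C, H, W).

    Descriptors are pooled along each spatial axis and turned into per-row and
    per-column weights. (The real CAM also passes these descriptors through
    small learned convolutions, omitted here for brevity.)
    """
    pool_h = x.mean(axis=2, keepdims=True)   # (C, H, 1): pooled along width
    pool_w = x.mean(axis=1, keepdims=True)   # (C, 1, W): pooled along height
    a_h = _sigmoid(pool_h)                   # row-wise positional weights in (0, 1)
    a_w = _sigmoid(pool_w)                   # column-wise positional weights in (0, 1)
    return x * a_h * a_w                     # broadcasts back to (C, H, W)

def awf_fuse(fine, coarse):
    """Toy attention-weighted fusion of two adjacent FPN levels.

    `fine` has shape (C, H, W); `coarse` has shape (C, H//2, W//2) and is
    upsampled by nearest-neighbour repetition before the weighted sum.
    """
    up = coarse.repeat(2, axis=1).repeat(2, axis=2)   # (C, H, W)
    return coordinate_attention(fine) + coordinate_attention(up)

rng = np.random.default_rng(0)
fine = rng.standard_normal((8, 16, 16))
coarse = rng.standard_normal((8, 8, 8))
fused = awf_fuse(fine, coarse)
print(fused.shape)  # (8, 16, 16)
```

Because the gates lie in (0, 1), each weighted map is attenuated relative to its input, which is how redundant background activations are suppressed before the pyramid levels are summed.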