Text-Attentional Convolutional Neural Network for Scene Text Detection

Cited by: 247
Authors
He, Tong [1 ,2 ]
Huang, Weilin [1 ,3 ]
Qiao, Yu [1 ,3 ]
Yao, Jian [2 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
[2] Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430072, Peoples R China
[3] Chinese Univ Hong Kong, Multimedia Lab, Hong Kong, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Maximally stable extremal regions; text detector; convolutional neural networks; multi-level supervised information; multi-task learning; READING TEXT; LOCALIZATION;
DOI
10.1109/TIP.2016.2547588
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where cluttered background information may dominate the true text features in the deep representation, leading to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection built on a novel text-attentional convolutional neural network (Text-CNN) that focuses specifically on extracting text-related regions and features from image components. We develop a new learning mechanism to train the Text-CNN with multi-level, rich supervised information, including a text region mask, character labels, and binary text/non-text labels. This rich supervision gives the Text-CNN a strong capability for discriminating ambiguous text and increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where the low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, we develop a powerful low-level detector, contrast-enhancement maximally stable extremal regions (MSERs), which extends the widely used MSERs by enhancing the intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving on the state of the art.
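The abstract formulates training as a multi-task problem combining three supervision signals: the main binary text/non-text loss plus auxiliary character-label and region-mask losses. The exact objective and weights are not given in the abstract; below is a minimal NumPy sketch of one plausible weighted-sum formulation, where the class counts, the squared-error mask term, and the weights `w_char` and `w_mask` are all illustrative assumptions, not the paper's values.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    """Negative log-likelihood of the true class."""
    p = softmax(logits)
    return -np.log(p[label] + 1e-12)

def multi_task_loss(text_logits, text_label,
                    char_logits, char_label,
                    mask_pred, mask_true,
                    w_char=0.5, w_mask=0.5):
    """Weighted sum of the three supervision signals described in
    the abstract: the main binary text/non-text loss, an auxiliary
    character-classification loss, and a pixel-wise squared error
    on the predicted text region mask. Weights are hypothetical."""
    l_text = cross_entropy(text_logits, text_label)      # main task
    l_char = cross_entropy(char_logits, char_label)      # auxiliary
    l_mask = np.mean((mask_pred - mask_true) ** 2)       # auxiliary
    return l_text + w_char * l_char + w_mask * l_mask
```

In this formulation the auxiliary terms only shape the shared features during training; at test time the network would be used purely as a text/non-text classifier.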
Pages: 2529-2541
Page count: 13
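The abstract's contrast-enhancement MSER extends standard MSERs by boosting the intensity contrast between text and background before region extraction. The paper's actual enhancement algorithm is not specified in the abstract; the sketch below uses generic percentile-based contrast stretching as a stand-in preprocessing step, with the percentile cutoffs chosen arbitrarily for illustration.

```python
import numpy as np

def stretch_contrast(gray, low_pct=2, high_pct=98):
    """Linearly stretch intensities between the given percentiles to
    the full [0, 255] range, increasing text/background contrast
    before an MSER detector is run on the result. This is a generic
    stand-in for the paper's contrast-enhancement step, which the
    abstract does not describe in detail."""
    lo, hi = np.percentile(gray, [low_pct, high_pct])
    if hi <= lo:  # flat image: nothing to stretch
        return gray.astype(np.uint8)
    out = (gray.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

On a low-contrast patch (e.g., gray text on a gray wall) this maps the occupied intensity band onto the full dynamic range, which makes extremal regions more stable for the downstream MSER stage.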