Scene Text Image Super-Resolution in the Wild

Cited by: 92
Authors
Wang, Wenjia [1 ]
Xie, Enze [2 ]
Liu, Xuebo [1 ]
Wang, Wenhai [3 ]
Liang, Ding [1 ]
Shen, Chunhua [4 ]
Bai, Xiang [5 ]
Affiliations
[1] SenseTime Res, Shatin, Hong Kong, Peoples R China
[2] Univ Hong Kong, Shatin, Hong Kong, Peoples R China
[3] Nanjing Univ, Nanjing, Peoples R China
[4] Univ Adelaide, Adelaide, SA, Australia
[5] Huazhong Univ Sci & Technol, Wuhan, Peoples R China
Source
COMPUTER VISION - ECCV 2020, PT X | 2020, Vol. 12355
Keywords
Scene text recognition; Super-resolution; Dataset; Sequence; Boundary; RECOGNITION; NETWORK;
DOI
10.1007/978-3-030-58607-2_38
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Low-resolution text images are often seen in natural scenes, such as documents captured by mobile phones. Recognizing low-resolution text images is challenging because they lose detailed content information, leading to poor recognition accuracy. An intuitive solution is to introduce super-resolution (SR) techniques as pre-processing. However, previous single image super-resolution (SISR) methods are trained on synthetic low-resolution images (e.g., bicubic down-sampling), which is overly simplistic and ill-suited to real low-resolution text recognition. To this end, we propose a real scene text SR dataset, termed TextZoom. It contains paired real low-resolution and high-resolution images captured in the wild by cameras with different focal lengths. It is more authentic and challenging than synthetic data, as shown in Fig. 1. We argue that improving recognition accuracy is the ultimate goal of scene text SR. For this purpose, we develop a new Text Super-Resolution Network, termed TSRN, with three novel modules. (1) A sequential residual block is proposed to extract the sequential information of text images. (2) A boundary-aware loss is designed to sharpen character boundaries. (3) A central alignment module is proposed to relieve the misalignment problem in TextZoom. Extensive experiments on TextZoom demonstrate that, compared to training on synthetic SR data, our TSRN largely improves the recognition accuracy of CRNN by over 13%, and of ASTER and MORAN by nearly 9.0%. Furthermore, our TSRN clearly outperforms 7 state-of-the-art SR methods in boosting the recognition accuracy of LR images in TextZoom; for example, it outperforms LapSRN by over 5% and 8% on the recognition accuracy of ASTER and CRNN, respectively. Our results suggest that low-resolution text recognition in the wild is far from solved, and more research effort is needed. The code and models will be released at: github.com/JasonBoy1/TextZoom
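The boundary-aware loss in (2) can be illustrated with a minimal toy sketch: penalizing the mismatch between the gradient fields of the super-resolved output and the high-resolution ground truth pushes reconstructed character edges to be as sharp as the originals. The NumPy version below is an assumption-laden illustration of that idea; the function names (`gradient_field`, `boundary_aware_loss`) and finite-difference formulation are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def gradient_field(img):
    # Horizontal and vertical finite differences of a grayscale image.
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return gx, gy

def boundary_aware_loss(sr, hr):
    # L1 distance between the gradient fields of the super-resolved (sr)
    # and high-resolution (hr) images. A blurred character boundary has
    # weaker gradients than a sharp one, so the loss grows with blur.
    sr_gx, sr_gy = gradient_field(sr)
    hr_gx, hr_gy = gradient_field(hr)
    return np.abs(sr_gx - hr_gx).mean() + np.abs(sr_gy - hr_gy).mean()

# Toy example: a sharp vertical edge (stand-in for a character stroke)
# versus a blurred reconstruction of the same edge.
hr = np.zeros((8, 8)); hr[:, 4:] = 1.0            # sharp edge at column 4
blurry = np.zeros((8, 8))
blurry[:, 3:5] = 0.5; blurry[:, 5:] = 1.0         # edge smeared over 2 columns
sharp = hr.copy()
```

Minimizing this term alongside a pixel-wise loss rewards reconstructions whose edge profiles match the ground truth, rather than merely matching average intensities.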
Pages: 650-666 (17 pages)
Related Papers
47 in total
[21] Kim J, 2016, Proc. IEEE CVPR, p. 1637. DOI: 10.1109/CVPR.2016.182 (also indexed as 10.1109/CVPR.2016.181)
[22] Pandey RK, 2018, arXiv:1812.02475
[23] Lai W-S, Huang J-B, Ahuja N, Yang M-H. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution. Proc. IEEE CVPR 2017, pp. 5835-5843
[24] Leal-Taixe L, 2019, ECCV 2018 Workshops, Vol. 11131. DOI: 10.1007/978-3-030-11015-4
[25] Ledig C, Theis L, Huszar F, Caballero J, Cunningham A, Acosta A, Aitken A, Tejani A, Totz J, Wang Z, Shi W. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. Proc. IEEE CVPR 2017, pp. 105-114
[26] Lim B, Son S, Kim H, Nah S, Lee KM. Enhanced Deep Residual Networks for Single Image Super-Resolution. Proc. IEEE CVPR Workshops 2017, pp. 1132-1140
[27] Liu W, 2016, Proc. British Machine Vision Conference
[28] Liu ZC, 2018, Proc. AAAI Conference on Artificial Intelligence, p. 7194
[29] Long SB, 2020, arXiv:1811.04256
[30] Luo C, Jin L, Sun Z. MORAN: A Multi-Object Rectified Attention Network for scene text recognition. Pattern Recognition, 2019, 90:109-118