Perceiving Multiple Representations for scene text image super-resolution guided by text recognizer

Cited by: 3
Authors
Shi, Qin [1 ,4 ]
Zhu, Yu [1 ]
Liu, Yatong [1 ]
Ye, Jiongyao [1 ]
Yang, Dawei [2 ,3 ,4 ]
Affiliations
[1] East China Univ Sci & Technol, Sch Informat Sci & Engn, Shanghai 200237, Peoples R China
[2] Fudan Univ, Zhongshan Hosp, Dept Pulm & Crit Care Med, Shanghai 200032, Peoples R China
[3] Fudan Univ, Zhongshan Hosp Xiamen, Dept Pulm & Crit Care Med, Shanghai 361015, Peoples R China
[4] Shanghai Engn Res Ctr Internet Things Resp Med, Shanghai 200032, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Scene text image super-resolution; Scene text recognition; Contextual information; Visual features; Frequency domain learning; NEURAL-NETWORK;
DOI
10.1016/j.engappai.2023.106551
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Single image super-resolution (SISR) aims to recover clear high-resolution images from low-resolution images, and has made great progress with the development of deep learning in recent years. Scene text image super-resolution (STISR) is a subfield of SISR whose goal is to increase the resolution of a low-resolution text image and enhance the readability of the characters it contains. Despite significant improvements from recent approaches, STISR remains challenging due to the diversity of backgrounds, text appearances, layouts, etc. This paper presents a Perceiving Multiple Representations (PerMR) method for better super-resolution performance on scene text images. PerMR is a unified network that combines super-resolution with text recognition and exploits the recognizer's feedback to facilitate super-resolution. Specifically, contextual information from the text decoder is extracted to provide sequence-specific guidance and enable the super-resolution model to pay more attention to the text region. Meanwhile, low-level and high-level visual features from the vision backbone of the recognition network are integrated to further improve visual quality. Additionally, we incorporate a frequency branch into the vanilla convolution unit, which efficiently enhances global and local feature representations. Experiments on the STISR benchmark dataset TextZoom validate that PerMR not only generates more distinguishable images but also outperforms current state-of-the-art methods. Compared to the baseline model TSRN, PerMR boosts average recognition accuracy by 5.9% using ASTER, 5.8% using MORAN and 10.6% using CRNN. PerMR outperforms the advanced method TPGSR-3 by 1.4% on ASTER, 0.1% on MORAN and 0.2% on CRNN, and surpasses TATT by 0.6% on ASTER and 1.1% on MORAN. Furthermore, PerMR demonstrates good robustness and generalization when tackling low-quality text images across multiple scene text recognition datasets. The experimental results verify the capability of PerMR to boost text recognition performance.
Pages: 13