Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel Aggregation Network

Citations: 361
Authors
Wang, Wenhai [1]
Xie, Enze [2,4]
Song, Xiaoge [1]
Zang, Yuhang [3]
Wang, Wenjia [2]
Lu, Tong [1]
Yu, Gang [4]
Shen, Chunhua [5]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Tongji Univ, Shanghai, Peoples R China
[3] Univ Elect Sci & Technol China, Beijing, Peoples R China
[4] Megvii Face Technol Inc, Beijing, Peoples R China
[5] Univ Adelaide, Adelaide, SA, Australia
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019) | 2019
DOI
10.1109/ICCV.2019.00853
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Scene text detection, an important step of scene text reading systems, has witnessed rapid development with convolutional neural networks. Nonetheless, two main challenges still exist and hamper its deployment to real-world applications. The first is the trade-off between speed and accuracy. The second is modeling arbitrary-shaped text instances. Recently, some methods have been proposed to tackle arbitrary-shaped text detection, but they rarely take the speed of the entire pipeline into consideration, which may fall short in practical applications. In this paper, we propose an efficient and accurate arbitrary-shaped text detector, termed Pixel Aggregation Network (PAN), which is equipped with a low computational-cost segmentation head and learnable post-processing. More specifically, the segmentation head is made up of the Feature Pyramid Enhancement Module (FPEM) and the Feature Fusion Module (FFM). FPEM is a cascadable U-shaped module that introduces multi-level information to guide better segmentation. FFM gathers the features produced by FPEMs of different depths into a final feature for segmentation. The learnable post-processing is implemented by Pixel Aggregation (PA), which precisely aggregates text pixels using predicted similarity vectors. Experiments on several standard benchmarks validate the superiority of the proposed PAN. It is worth noting that our method achieves a competitive F-measure of 79.9% at 84.2 FPS on CTW1500.
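The Pixel Aggregation step described above can be illustrated with a toy NumPy sketch: each text pixel outside a kernel is assigned to the kernel whose mean similarity vector lies closest, provided the distance falls under a threshold. The function name, array layout, and threshold value here are illustrative assumptions, not the paper's implementation (which operates on network predictions and uses a margin learned via an aggregation/discrimination loss).

```python
import numpy as np

def pixel_aggregation(text_mask, kernel_labels, sim_vectors, dist_thresh=0.8):
    """Toy sketch of PAN-style pixel aggregation (assumed interface).

    text_mask:     (H, W) bool, predicted text region.
    kernel_labels: (H, W) int, connected-component labels of the predicted
                   kernels (0 = background), assumed precomputed.
    sim_vectors:   (H, W, D) float, per-pixel similarity vectors.
    Returns a label map where each text pixel joins the kernel with the
    nearest mean similarity vector, if within dist_thresh.
    """
    out = kernel_labels.copy()
    ids = [k for k in np.unique(kernel_labels) if k != 0]
    if not ids:
        return out
    # Mean similarity vector of each kernel's pixels.
    means = np.stack([sim_vectors[kernel_labels == k].mean(axis=0) for k in ids])
    # Candidate pixels: text pixels not already inside a kernel.
    ys, xs = np.where(text_mask & (kernel_labels == 0))
    for y, x in zip(ys, xs):
        d = np.linalg.norm(means - sim_vectors[y, x], axis=1)
        j = int(d.argmin())
        if d[j] < dist_thresh:
            out[y, x] = ids[j]
    return out
```

This captures the key design choice: because similarity vectors are learned, the clustering threshold replaces heuristic geometric expansion rules used by earlier segmentation-based detectors.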
Pages: 8439-8448
Page count: 10