Multi-modal Text Recognition Networks: Interactive Enhancements Between Visual and Semantic Features

Cited by: 45
Authors
Na, Byeonghu [1 ]
Kim, Yoonsik [2 ]
Park, Sungrae [3 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Dept Ind & Syst Engn, Daejeon, South Korea
[2] NAVER Corp, Clova AI Res, Seongnam Si, South Korea
[3] Upstage, Upstage AI Res, Yongin, South Korea
Source
COMPUTER VISION - ECCV 2022, PT XXVIII | 2022, Vol. 13688
DOI
10.1007/978-3-031-19815-1_26
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Linguistic knowledge has brought great benefits to scene text recognition by providing semantics to refine character sequences. However, since linguistic knowledge has been applied individually to the output sequence, previous methods have not fully utilized the semantics to understand visual clues for text recognition. This paper introduces a novel method, called Multi-modAl Text Recognition Network (MATRN), that enables interactions between visual and semantic features for better recognition performance. Specifically, MATRN identifies visual and semantic feature pairs and encodes spatial information into the semantic features. Based on this spatial encoding, visual and semantic features are enhanced by referring to related features in the other modality. Furthermore, MATRN encourages combining semantic features into visual features by hiding visual clues related to the character during the training phase. Our experiments demonstrate that MATRN achieves state-of-the-art performance on seven benchmarks by large margins, while naive combinations of the two modalities show less effective improvements. Further ablation studies confirm the effectiveness of the proposed components. Our implementation is available at https://github.com/wp03052/MATRN.
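The cross-modality interaction described in the abstract can be pictured as bidirectional attention between a flattened visual feature map and positionally encoded semantic (character) features, with some visual positions hidden during training. Below is a minimal, hypothetical PyTorch sketch of that idea; the module and parameter names (CrossModalEnhancer, d_model, vis_mask_ratio, etc.) are assumptions for illustration and this is not the authors' implementation, which is available at the GitHub link above.

import torch
import torch.nn as nn


class CrossModalEnhancer(nn.Module):
    """Hypothetical sketch: mutual enhancement of visual and semantic features."""

    def __init__(self, d_model=256, n_heads=8, max_len=32):
        super().__init__()
        # Learned spatial/positional encoding added to the semantic features.
        self.sem_pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        # Each modality attends to the other.
        self.sem_from_vis = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.vis_from_sem = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, vis, sem, vis_mask_ratio=0.0):
        # vis: (B, H*W, d_model) flattened visual feature map
        # sem: (B, T, d_model)   per-character semantic features
        if self.training and vis_mask_ratio > 0:
            # Zero out a random subset of visual positions so the network is
            # pushed to exploit semantic features (a rough stand-in for the
            # paper's character-related visual-clue hiding).
            keep = torch.rand(vis.shape[:2], device=vis.device) > vis_mask_ratio
            vis = vis * keep.unsqueeze(-1)
        sem = sem + self.sem_pos[:, : sem.size(1)]
        sem_enh, _ = self.sem_from_vis(query=sem, key=vis, value=vis)
        vis_enh, _ = self.vis_from_sem(query=vis, key=sem, value=sem)
        # Residual connections keep the original features accessible downstream.
        return vis + vis_enh, sem + sem_enh


# Example usage with dummy tensors:
# enhancer = CrossModalEnhancer()
# v = torch.randn(2, 8 * 32, 256)   # visual features
# s = torch.randn(2, 25, 256)       # semantic features for 25 characters
# v2, s2 = enhancer(v, s, vis_mask_ratio=0.1)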
Pages: 446-463
Number of pages: 18