Characters as graphs: Interpretable handwritten Chinese character recognition via Pyramid Graph Transformer

Cited by: 13
Authors
Gan, Ji [1 ,2 ]
Chen, Yuyan [1 ]
Hu, Bo [1 ,2 ]
Leng, Jiaxu [1 ,2 ]
Wang, Weiqiang [3 ]
Gao, Xinbo [1 ,2 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing, Peoples R China
[2] Chongqing Inst Brain & Intelligence, Guangyang Bay Lab, Chongqing, Peoples R China
[3] Univ Chinese Acad Sci, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Handwritten Chinese character recognition; Transformer; Graph convolutional network; Pyramid graph; Online; Representation; Extraction;
DOI
10.1016/j.patcog.2023.109317
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
It is meaningful but challenging to teach machines to recognize handwritten Chinese characters. Conventional approaches typically view handwritten Chinese characters as either static images or temporal trajectories, which may ignore the inherent geometric semantics of characters. Instead, here we first propose to represent handwritten characters as skeleton graphs, explicitly considering the natural characteristics of characters (i.e., characters as graphs). Furthermore, we propose a novel Pyramid Graph Transformer (PyGT) to specifically process the graph-structured characters, which fully integrates the advantages of Transformers and graph convolutional networks. Specifically, our PyGT can learn better graph features through (i) capturing the global information from all nodes with a graph attention mechanism and (ii) modelling the explicit local adjacency structures of nodes with graph convolutions. Furthermore, the PyGT learns multi-resolution features by constructing a progressive shrinking pyramid. Compared with existing approaches, recognizing characters as geometric graphs is more interpretable. Moreover, the proposed method is generic for both online and offline handwritten Chinese character recognition (HCCR), and it can also be feasibly extended to handwritten text recognition. Extensive experiments empirically demonstrate the superiority of PyGT over prevalent approaches including 2D-CNN, RNN/1D-CNN, and Vision Transformer (ViT) for HCCR. The code is available at https://github.com/ganji15/PyGT-HCCR . © 2023 Elsevier Ltd. All rights reserved.
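The two ingredients the abstract highlights — (i) global attention over all skeleton-graph nodes and (ii) a graph convolution over the explicit node adjacency — can be sketched together in a single block. This is a minimal NumPy illustration under assumed shapes, not the authors' implementation (see the linked repository for that); the function name `pygt_block` and its weight matrices are hypothetical, and the pyramid/pooling stages are omitted.

```python
import numpy as np

def pygt_block(X, A, Wq, Wk, Wv, Wg):
    """Hypothetical PyGT-style block: global graph attention + local
    graph convolution, fused by summation (a sketch, not the paper's code).
    X: (N, d) node features; A: (N, N) binary adjacency of the skeleton graph.
    """
    # (i) global graph attention: every node attends to all other nodes
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])            # scaled dot-product
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)           # row-wise softmax
    global_feat = attn @ V
    # (ii) graph convolution: aggregate neighbours via normalized adjacency
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    local_feat = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ Wg
    return global_feat + local_feat                   # fuse global + local

# Toy skeleton graph: 4 nodes on a chain, 8-dim features per node
rng = np.random.default_rng(0)
N, d = 4, 8
X = rng.standard_normal((N, d))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Ws = [rng.standard_normal((d, d)) for _ in range(4)]
out = pygt_block(X, A, *Ws)
print(out.shape)  # (4, 8)
```

A full PyGT would stack such blocks and progressively pool the node set to build the multi-resolution pyramid the abstract describes; here the sum fusion stands in for whatever fusion the paper actually uses.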
Pages: 13