Isolated Sign Recognition from RGB Video using Pose Flow and Self-Attention

Cited by: 43
Authors
De Coster, Mathieu [1 ]
Van Herreweghe, Mieke [2 ]
Dambre, Joni [1 ]
Affiliations
[1] Univ Ghent, IMEC, IDLab AIRO, Technol Pk Zwijnaarde 126, Ghent, Belgium
[2] Univ Ghent, Blandijnberg 2, Ghent, Belgium
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021 | 2021
Funding
Academy of Finland; European Union Horizon 2020;
Keywords
LANGUAGE;
DOI
10.1109/CVPRW53098.2021.00383
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Automatic sign language recognition lies at the intersection of natural language processing (NLP) and computer vision. The highly successful transformer architectures, based on multi-head attention, originate from the field of NLP. The Video Transformer Network (VTN) is an adaptation of this concept for tasks that require video understanding, e.g., action recognition. However, due to the limited amount of labeled data that is commonly available for training automatic sign (language) recognition, the VTN cannot reach its full potential in this domain. In this work, we reduce the impact of this data limitation by automatically pre-extracting useful information from the sign language videos. In our approach, different types of information are offered to a VTN in a multi-modal setup: per-frame human pose keypoints (extracted with OpenPose) to capture body movement, and hand crops to capture the evolution of hand shapes. We evaluate our method on the recently released AUTSL dataset for isolated sign recognition and obtain 92.92% accuracy on the test set using only RGB data. For comparison, the VTN architecture without hand crops and pose flow achieves 82% accuracy. A qualitative inspection of our model hints at further potential of multi-modal multi-head attention in a sign language recognition context.
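As a rough illustration of the setup described in the abstract, the sketch below shows, in PyTorch, how per-frame pose keypoints and hand-crop features can be embedded, fused into one token per frame, and passed through a self-attention encoder over the temporal axis before classification. Module names, dimensions, the keypoint count, and the 226-class output are illustrative assumptions; this is not the authors' released implementation.

# Minimal sketch (assumed structure, not the authors' code): fuse per-frame
# pose keypoints and hand-crop features, apply temporal self-attention,
# then classify the isolated sign.
import torch
import torch.nn as nn

class MultiModalSignTransformer(nn.Module):
    def __init__(self, num_classes=226, num_keypoints=54, crop_feat_dim=512, d_model=256):
        super().__init__()
        # Embed flattened (x, y) pose keypoints for each frame.
        self.pose_embed = nn.Linear(num_keypoints * 2, d_model)
        # Embed pre-extracted features of the two hand crops for each frame.
        self.hand_embed = nn.Linear(2 * crop_feat_dim, d_model)
        # Fuse both modalities into a single token per frame.
        self.fuse = nn.Linear(2 * d_model, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, pose, hand_feats):
        # pose:       (batch, frames, num_keypoints * 2)
        # hand_feats: (batch, frames, 2 * crop_feat_dim)
        tokens = self.fuse(torch.cat([self.pose_embed(pose),
                                      self.hand_embed(hand_feats)], dim=-1))
        encoded = self.encoder(tokens)                # self-attention over frames
        return self.classifier(encoded.mean(dim=1))   # temporal pooling, then classify

# Usage example: 16-frame clip, 54 keypoints, 512-d hand-crop features per hand.
model = MultiModalSignTransformer()
logits = model(torch.randn(1, 16, 54 * 2), torch.randn(1, 16, 2 * 512))

The key design point conveyed by the abstract is that pose keypoints and hand crops are pre-extracted from the RGB video, so the transformer attends over compact per-frame descriptors rather than raw pixels, which mitigates the limited amount of labeled sign language data.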
Pages: 3436-3445
Number of pages: 10