On the role of multimodal learning in the recognition of sign language

Cited by: 15
Authors
Ferreira, Pedro M. [1 ,3 ]
Cardoso, Jaime S. [2 ,4 ]
Rebelo, Ana [5 ,6 ]
Affiliations
[1] INESC TEC, Porto, Portugal
[2] INESC TEC, Informat Proc & Pattern Recognit Area, Ctr Telecommun & Multimedia, Porto, Portugal
[3] Univ Porto, Porto, Portugal
[4] Univ Porto, Fac Engn, Porto, Portugal
[5] INESC TEC, Oporto, Portugal
[6] Univ Portucalense, Oporto, Portugal
Keywords
Sign language recognition; Multimodal learning; Convolutional neural networks; Kinect; Leap Motion
DOI
10.1007/s11042-018-6565-5
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Sign Language Recognition (SLR) has become one of the most important research areas in the field of human-computer interaction. SLR systems are meant to automatically translate sign language into text or speech, in order to reduce the communication gap between deaf and hearing people. The aim of this paper is to exploit multimodal learning techniques for accurate SLR, making use of data provided by Kinect and Leap Motion. In this regard, single-modality approaches as well as different multimodal methods, mainly based on convolutional neural networks, are proposed. Our main contribution is a novel multimodal end-to-end neural network that explicitly models private feature representations, which are specific to each modality, and shared feature representations, which are similar across modalities. By imposing such regularization in the learning process, the underlying idea is to increase the discriminative ability of the learned features and, hence, improve the generalization capability of the model. Experimental results demonstrate that multimodal learning yields an overall improvement in sign recognition performance. In particular, the novel neural network architecture outperforms the current state-of-the-art methods for the SLR task.
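The record does not give the paper's loss function, but the shared/private split described in the abstract is commonly enforced with two penalties: an orthogonality term that keeps each modality's private features distinct from its shared features, and a similarity term that pulls the two modalities' shared features together. The sketch below is a hypothetical toy illustration of that regularization idea, not the authors' implementation; all names and values are invented.

```python
# Hypothetical sketch of shared/private feature regularization, a common
# formulation in multimodal models (not the authors' code).

def dot(u, v):
    """Inner product of two equal-length feature vectors."""
    return sum(a * b for a, b in zip(u, v))

def orthogonality_penalty(private, shared):
    """Squared inner product: small when private and shared features are orthogonal."""
    return dot(private, shared) ** 2

def similarity_penalty(shared_a, shared_b):
    """Squared Euclidean distance: small when the two shared representations agree."""
    return sum((a - b) ** 2 for a, b in zip(shared_a, shared_b))

# Toy feature vectors for the two modality branches (Kinect and Leap Motion).
kinect_private, kinect_shared = [1.0, 0.0], [0.0, 1.0]
leap_private, leap_shared = [0.0, 2.0], [0.0, 1.0]

# Regularization term added to the task loss during training.
reg = (orthogonality_penalty(kinect_private, kinect_shared)
       + orthogonality_penalty(leap_private, leap_shared)
       + similarity_penalty(kinect_shared, leap_shared))
print(reg)  # 4.0: only the Leap private/shared overlap contributes
```

In an end-to-end network these penalties would be computed on the outputs of the private and shared encoder branches and summed with the classification loss; here plain lists stand in for those feature tensors.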
Pages: 10035-10056
Page count: 22
Related References
31 records in total
[11] Dominio F, Donadeo M, Zanuttigh P. Combining multiple depth-based descriptors for hand gesture recognition [J]. PATTERN RECOGNITION LETTERS, 2014, 50: 101-111.
[12] Ferreira PM, Cardoso JS, Rebelo A. Multimodal Learning for Sign Language Recognition [J]. PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2017), 2017, 10255: 313-321.
[13] Geng Y, Zhang G, Li W, Gu Y, Liang RZ, Liang G, Wang J, Wu Y, Patil N, Wang JY. A Novel Image Tag Completion Method Based on Convolutional Neural Transformation [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II, 2017, 10614: 539-546.
[14] Huang C, 2016, ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS.
[15] Kurakin A, 2012, EUROPEAN SIGNAL PROCESSING CONFERENCE, p. 1975.
[16] Lenz I, Lee H, Saxena A. Deep learning for detecting robotic grasps [J]. INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2015, 34(4-5): 705-724.
[17] Marin G, 2014, IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, p. 1565. DOI: 10.1109/ICIP.2014.7025313.
[18] Marin G, Dominio F, Zanuttigh P. Hand gesture recognition with jointly calibrated Leap Motion and depth sensor [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2016, 75(22): 14991-15015.
[19] Potter LE, 2013, Proceedings of the 25th Australian Computer-Human Interaction Conference, p. 175. DOI: 10.1145/2541016.2541072.
[20] Ramachandram D, Taylor GW. Deep Multimodal Learning: A survey on recent advances and trends [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2017, 34(6): 96-108.