An incremental cross-modal transfer learning method for gesture interaction

Cited by: 3
Authors
Zhong, Junpei [1 ]
Li, Jie [2 ]
Lotfi, Ahmad [3 ]
Liang, Peidong [4 ]
Yang, Chenguang [5 ]
Affiliations
[1] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[2] Chongqing Technol & Business Univ, Chongqing 400067, Peoples R China
[3] Nottingham Trent Univ, Nottingham NG11 8NS, England
[4] Quanzhou HIT Res Inst Engn & Technol, Quanzhou 362008, Peoples R China
[5] South China Univ Technol, Guangzhou 510640, Peoples R China
Keywords
Transfer learning; Gesture recognition; Multi-modal; EMG; Depth camera; Leap Motion
DOI
10.1016/j.robot.2022.104181
CLC Classification
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
Gestures are an important means of human-robot interaction because they convey accurate and intuitive instructions to robots. Various sensors can capture gestures; we apply three different sensors that provide different modalities for recognizing human gestures. From a transfer learning perspective, each modality has its own statistical properties: the data-sets share the same labels, but the source and validation data-sets follow different statistical distributions. To tackle transfer learning across sensors with such data-sets, we propose a weighting method that adjusts the probability distributions of the data and yields faster convergence. We further apply this method within a broad learning system, which has proven efficient for learning with incremental learning capability. The results show that, although the three sensors measure different parts of the body using different technologies, transfer learning can discover the weighting correlation among the data-sets. They also suggest that the proposed transfer learning can adjust data with different distributions in a way that may reflect the physical correlation between different body parts when performing gestures. (c) 2022 Elsevier B.V. All rights reserved.
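The abstract gives no formulas, but a minimal, hypothetical sketch of the two ingredients it names, a distribution-weighting step across sensor data-sets and a broad learning system (BLS) with a closed-form output layer, might look like the following Python. The density-ratio weighting via a domain classifier, the layer sizes, and all function names are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: importance weighting between a source-sensor and a
# target-sensor data-set, followed by a tiny Broad Learning System (BLS).
# NOT the paper's implementation; the weighting scheme and sizes are assumptions.
import numpy as np
from numpy.linalg import pinv
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def importance_weights(X_src, X_tgt):
    """Estimate p_target(x)/p_source(x) with a domain classifier (density-ratio trick)."""
    X = np.vstack([X_src, X_tgt])
    d = np.hstack([np.zeros(len(X_src)), np.ones(len(X_tgt))])  # 0 = source, 1 = target
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p = clf.predict_proba(X_src)[:, 1]
    w = p / np.clip(1.0 - p, 1e-6, None)          # odds ratio approximates the density ratio
    return w / w.mean()                           # normalise to mean 1

def bls_fit(X, Y, w, n_feat=40, n_enh=200, lam=1e-3):
    """Weighted ridge solution for feature nodes + enhancement nodes."""
    Wf = rng.standard_normal((X.shape[1], n_feat))
    Z = X @ Wf                                    # mapped feature nodes
    We = rng.standard_normal((n_feat, n_enh))
    H = np.tanh(Z @ We)                           # enhancement nodes
    A = np.hstack([Z, H])
    Wd = np.diag(w)                               # per-sample importance weights
    beta = pinv(A.T @ Wd @ A + lam * np.eye(A.shape[1])) @ (A.T @ Wd @ Y)
    return Wf, We, beta

def bls_predict(model, X):
    Wf, We, beta = model
    Z = X @ Wf
    A = np.hstack([Z, np.tanh(Z @ We)])
    return A @ beta

# Toy usage: 64-dim gesture features from two sensors, 5 gesture classes.
X_src = rng.standard_normal((300, 64))
y_src = rng.integers(0, 5, 300)
X_tgt = rng.standard_normal((100, 64)) + 0.5      # shifted target distribution
Y_src = np.eye(5)[y_src]                          # one-hot labels
w = importance_weights(X_src, X_tgt)
model = bls_fit(X_src, Y_src, w)
pred = bls_predict(model, X_tgt).argmax(axis=1)
```

In this sketch the weights simply re-balance the ridge regression toward source samples that look like the target sensor's distribution; the incremental aspect of BLS (adding enhancement nodes without full retraining) is omitted for brevity.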
Pages: 12