Gesture recognition for human-robot collaboration: A review

Citations: 257
Authors
Liu, Hongyi [1 ]
Wang, Lihui [1 ]
Affiliations
[1] KTH Royal Inst Technol, Dept Prod Engn, Stockholm, Sweden
Keywords
Human-robot collaboration; Gesture; Gesture recognition; Hidden Markov models; Neural networks; Particle filters; Visual tracking; Impoverishment; Regression; Objects; Parts
DOI
10.1016/j.ergon.2017.02.004
Chinese Library Classification
T [Industrial technology]
Discipline Classification Code
08
Abstract
Recently, the concept of human-robot collaboration has attracted substantial research interest. Instead of robots replacing human workers in the workplace, human-robot collaboration allows human workers and robots to work together in a shared manufacturing environment. Human-robot collaboration can relieve human workers of heavy tasks through assistive robots, provided that effective communication channels between humans and robots are established. Although the communication channels between human workers and robots are still limited, gesture recognition has long been applied effectively as an interface between humans and computers. Covering some of the most important technologies and algorithms of gesture recognition, this paper provides an overview of gesture recognition research and explores the possibility of applying gesture recognition in human-robot collaborative manufacturing. An overall model of gesture recognition for human-robot collaboration is also proposed. The model comprises four essential technical components: sensor technologies, gesture identification, gesture tracking and gesture classification. Reviewed approaches are classified according to these four components. A statistical analysis follows the technical analysis, and future research trends are outlined towards the end of the paper. (C) 2017 Elsevier B.V. All rights reserved.
Pages: 355-367
Page count: 13