Predicting Human Intentions in Human-Robot Hand-Over Tasks Through Multimodal Learning

Cited by: 49
Authors
Wang, Weitian [1 ]
Li, Rui [1 ]
Chen, Yi [2 ]
Sun, Yi [2 ]
Jia, Yunyi [2 ]
Affiliations
[1] Montclair State Univ, Dept Comp Sci, Montclair, NJ 07043 USA
[2] Clemson Univ, Dept Automot Engn, Greenville, SC 29607 USA
Funding
U.S. National Science Foundation;
Keywords
Robots; Task analysis; Robot sensing systems; Collaboration; Education; Cognition; Tools; Extreme learning machine (ELM); human-robot hand-over; intention prediction; learning from demonstrations; natural language; wearable sensors
Keywords Plus
MACHINE; COLLABORATION; NETWORKS; TRACKING
DOI
10.1109/TASE.2021.3074873
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In human-robot shared manufacturing contexts, handing over product parts or tools between the robot and the human is an important collaborative task. Enabling the robot to correctly infer and predict human hand-over intentions is therefore necessary for improving task efficiency in human-robot collaboration. In this study, a teaching-learning-prediction (TLP) framework is proposed that allows the robot to learn from its human partner's multimodal demonstrations and predict human hand-over intentions. In this approach, the human can program the robot through demonstrations using natural language and wearable sensors, according to the task requirements and the human's working preferences. The robot then learns from the human hand-over demonstrations online via extreme learning machine (ELM) algorithms to update its cognitive capability, allowing it to use the learned policy to actively predict human intentions and assist its human companion in hand-over tasks. Experimental results and evaluations suggest that the human can easily reprogram the robot with the proposed approach when the task changes, and that the robot can effectively predict hand-over intentions with competitive accuracy to complete the hand-over tasks.
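The intention predictor described in the abstract is trained with extreme learning machine (ELM) algorithms. For orientation, the sketch below shows a minimal batch ELM classifier in Python; the feature dimensionality, intention labels, hidden-layer size, and sigmoid activation are illustrative assumptions, not the authors' configuration, and the paper's online learning would additionally update the output weights incrementally as new demonstrations arrive rather than solving once in batch.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: one hidden layer with fixed random
    weights; only the output weights are solved in closed form."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random projection followed by a sigmoid activation.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # X: (n_samples, n_features) demonstration features
        # y: (n_samples,) integer intention labels
        self.classes_ = np.unique(y)
        # Hidden-layer weights are drawn once and never trained (the ELM idea).
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # One-hot targets, one column per intention class.
        T = (y[:, None] == self.classes_[None, :]).astype(float)
        # Output weights via the Moore-Penrose pseudoinverse of H.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        scores = self._hidden(X) @ self.beta
        return self.classes_[np.argmax(scores, axis=1)]

# Toy usage with hypothetical data: 6-D wearable-sensor features,
# 3 hand-over intention classes.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 6))
y_train = rng.integers(0, 3, size=200)
model = ELMClassifier(n_hidden=64).fit(X_train, y_train)
print(model.predict(X_train[:5]))
```

In the paper's setting, the input features would come from the wearable sensors and parsed natural-language commands, and an online-sequential ELM-style update would replace the one-shot pseudoinverse solve so the robot can keep learning during collaboration.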
Pages: 2339-2353
Page count: 15