Predicting Human Intentions in Human-Robot Hand-Over Tasks Through Multimodal Learning

Cited by: 39
Authors
Wang, Weitian [1 ]
Li, Rui [1 ]
Chen, Yi [2 ]
Sun, Yi [2 ]
Jia, Yunyi [2 ]
Affiliations
[1] Montclair State Univ, Dept Comp Sci, Montclair, NJ 07043 USA
[2] Clemson Univ, Dept Automot Engn, Greenville, SC 29607 USA
Funding
U.S. National Science Foundation
Keywords
Robots; Task analysis; Robot sensing systems; Collaboration; Education; Cognition; Tools; Extreme learning machine (ELM); human-robot hand-over; intention prediction; learning from demonstrations; natural language; wearable sensors; MACHINE; COLLABORATION; NETWORKS
DOI
10.1109/TASE.2021.3074873
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In human-robot shared manufacturing contexts, the hand-over of product parts or tools between the robot and the human is an important collaborative task. Enabling the robot to correctly recognize and predict human hand-over intentions is therefore essential for improving task efficiency in human-robot collaboration. In this study, a teaching-learning-prediction (TLP) framework is proposed that allows the robot to learn from its human partner's multimodal demonstrations and predict human hand-over intentions. In this approach, the human programs the robot through demonstrations using natural language and wearable sensors, according to task requirements and the human's working preferences. The robot then learns online from the human hand-over demonstrations via extreme learning machine (ELM) algorithms to update its cognitive capability, enabling it to use the learned policy to actively predict human intentions and assist its human companion in hand-over tasks. Experimental results and evaluations suggest that the human can easily reprogram the robot with the proposed approach when the task changes, and that the robot effectively predicts hand-over intentions with competitive accuracy to complete the hand-over tasks.
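To make the learning step concrete, below is a minimal sketch of a batch ELM classifier for intention prediction, assuming feature vectors extracted from wearable-sensor and natural-language inputs and integer-coded intention labels. The class name, feature dimensions, hidden-layer size, and demo data are illustrative assumptions, not details from the paper (which uses an online ELM variant that updates incrementally).

import numpy as np

# Minimal batch ELM classifier (sketch). The paper's online ELM variant
# would update beta incrementally from new demonstrations; this closed-form
# fit shows the core idea.
class ELMClassifier:
    def __init__(self, n_features, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # Input weights and biases are random and never trained.
        self.W = rng.standard_normal((n_features, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.n_classes = n_classes
        self.beta = np.zeros((n_hidden, n_classes))

    def _hidden(self, X):
        # Sigmoid activation of the fixed random hidden layer.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # Only the output weights are learned, via the Moore-Penrose
        # pseudoinverse of the hidden-layer activation matrix.
        H = self._hidden(X)
        T = np.eye(self.n_classes)[y]          # one-hot intention targets
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        # Predicted intention = class with the largest output score.
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Hypothetical usage: 12-D multimodal features, 3 hand-over intention classes.
rng = np.random.default_rng(1)
X_demo = rng.standard_normal((200, 12))
y_demo = rng.integers(0, 3, size=200)
model = ELMClassifier(n_features=12, n_hidden=64, n_classes=3).fit(X_demo, y_demo)
print(model.predict(X_demo[:5]))

Because only the output weights are solved for in closed form while the random hidden layer stays fixed, ELM training is fast, which is what makes online updates from new demonstrations practical.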
Pages: 2339-2353
Page count: 15