Adaptive Multi-Task Human-Robot Interaction Based on Human Behavioral Intention

Citations: 4
Authors
Fu, Jian [1 ]
Du, Jinyu [1 ]
Teng, Xiang [1 ]
Fu, Yuxiang [2 ]
Wu, Lu [3 ]
Affiliations
[1] Wuhan Univ Technol, Sch Automat, Wuhan 430070, Peoples R China
[2] Univ British Columbia, Dept Comp Sci, Vancouver, BC V6T 1Z4, Canada
[3] Wuhan Univ Technol, Sch Informat, Wuhan 430070, Peoples R China
Source
IEEE ACCESS | 2021, Vol. 9
Funding
National Natural Science Foundation of China;
Keywords
Robots; Task analysis; Collaboration; Switches; Robot kinematics; Trajectory; Robot sensing systems; Human robot interaction; motion planning; MTProMP; MTiProMP; alternate learning; decomposition strategy; PROBABILISTIC MOVEMENT PRIMITIVES;
DOI
10.1109/ACCESS.2021.3115756
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Learning from demonstrations with Probabilistic Movement Primitives (ProMPs) has been widely used in robot skill learning, especially in human-robot collaboration. Although ProMP has been extended to multi-task settings inspired by the Gaussian mixture model, it still treats each task independently and ignores the common scenario in which a robot must switch adaptively among collaborative tasks to track instantaneous changes in human intention. To solve this problem, we propose an alternate-learning-based parameter estimation method and an empirical minimum-variation decomposition strategy with projection points, combined with a linear interpolation strategy for the weights, within a Gaussian mixture model framework. Alternate learning of the weights and parameters in the multi-task ProMP (MTProMP) lets the robot obtain a smooth composite trajectory plan that passes through the expected via points. The decomposition strategy determines how a desired via-point state is projected onto each individual ProMP component, minimizing the total deviation between each projection point and its respective prior. Linear interpolation automatically adjusts the weights between sequential via points. The proposed method and strategy are further extended to multi-task interaction ProMPs (MTiProMP). With MTProMP and MTiProMP, a robot can handle multiple tasks in an industrial factory and collaborate with a worker, switching from one task to another as the human's intention changes. Classical via-point trajectory planning experiments and human-robot collaboration experiments are performed on the Sawyer robot; the results show that MTProMP and MTiProMP with the proposed method and strategy perform better.
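The abstract's linear interpolation of mixture weights between sequential via points can be illustrated with a minimal sketch. This is not the paper's code: the function name `interpolate_weights` and its signature are assumptions; the sketch only shows the generic idea of blending two component-weight vectors of a Gaussian mixture of ProMPs as time moves from one via point to the next.

```python
import numpy as np

def interpolate_weights(w_a, w_b, t, t_a, t_b):
    """Linearly blend mixture weights between two via points.

    w_a, w_b : component-weight vectors active at via-point times t_a, t_b
    t        : query time with t_a <= t <= t_b
    Returns a renormalized weight vector, so the result is a valid mixture.
    """
    alpha = (t - t_a) / (t_b - t_a)                      # blend factor in [0, 1]
    w = (1.0 - alpha) * np.asarray(w_a, dtype=float) \
        + alpha * np.asarray(w_b, dtype=float)
    return w / w.sum()                                    # keep weights summing to 1

# Halfway between a via point assigned to task 0 and one assigned to task 1,
# both task components contribute equally:
w_mid = interpolate_weights([1.0, 0.0], [0.0, 1.0], t=0.5, t_a=0.0, t_b=1.0)
```

Such a schedule makes the composite trajectory hand over smoothly from one task's ProMP component to another's, rather than switching weights discontinuously at a via point.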
Pages: 133762-133773
Page count: 12
Related Papers
50 records in total
  • [21] Natural Grasp Intention Recognition Based on Gaze in Human-Robot Interaction
    Yang, Bo
    Huang, Jian
    Chen, Xinxing
    Li, Xiaolong
    Hasegawa, Yasuhisa
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (04) : 2059 - 2070
  • [22] Human-robot interaction review and challenges on task planning and programming
    Tsarouchi, Panagiota
    Makris, Sotiris
    Chryssolouris, George
    INTERNATIONAL JOURNAL OF COMPUTER INTEGRATED MANUFACTURING, 2016, 29 (08) : 916 - 931
  • [23] Human-Robot Interaction Review: Challenges and Solutions for Modern Industrial Environments
    Rodriguez-Guerra, Diego
    Sorrosal, Gorka
    Cabanes, Itziar
    Calleja, Carlos
    IEEE ACCESS, 2021, 9 : 108557 - 108578
  • [24] Use of Interaction Design Methodologies for Human-Robot Collaboration in Industrial Scenarios
    Prati, Elisa
    Villani, Valeria
    Grandi, Fabio
    Peruzzini, Margherita
    Sabattini, Lorenzo
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2022, 19 (04) : 3126 - 3138
  • [25] Human-Robot Interaction-Based Intention Sharing of Assistant Robot for Elderly People
    Yang, Jeong-Yean
    Kwon, Oh-Hun
    Lim, Chan-Soon
    Kwon, Dong-Soo
    INTELLIGENT AUTONOMOUS SYSTEMS 12, VOL 2, 2013, 194 : 283 - +
  • [26] Affordance-Based Human-Robot Interaction With Reinforcement Learning
    Munguia-Galeano, Francisco
    Veeramani, Satheeshkumar
    Hernandez, Juan David
    Wen, Qingmeng
    Ji, Ze
    IEEE ACCESS, 2023, 11 : 31282 - 31292
  • [27] A Vision-Based Measure of Environmental Effects on Inferring Human Intention During Human Robot Interaction
    Wei, Dong
    Chen, Lipeng
    Zhao, Longfei
    Zhou, Hua
    Huang, Bidan
    IEEE SENSORS JOURNAL, 2022, 22 (05) : 4246 - 4256
  • [28] Multi-modal referring expressions in human-human task descriptions and their implications for human-robot interaction
    Gross, Stephanie
    Krenn, Brigitte
    Scheutz, Matthias
    INTERACTION STUDIES, 2016, 17 (02) : 180 - 210
  • [29] GazeEMD: Detecting Visual Intention in Gaze-Based Human-Robot Interaction
    Shi, Lei
    Copot, Cosmin
    Vanlanduit, Steve
    ROBOTICS, 2021, 10 (02)
  • [30] Analysing Action and Intention Recognition in Human-Robot Interaction with ANEMONE
    Alenljung, Beatrice
    Lindblom, Jessica
    HUMAN-COMPUTER INTERACTION: INTERACTION TECHNIQUES AND NOVEL APPLICATIONS, HCII 2021, PT II, 2021, 12763 : 181 - 200