Learning Robotic Insertion Tasks From Human Demonstration

Cited by: 8
Authors
Wang, Kaimeng [1 ]
Zhao, Yu [1 ]
Sakuma, Ichiro [2 ]
Affiliations
[1] FANUC Amer Corp, FANUC Adv Res Lab, Union City, CA 48326 USA
[2] Univ Tokyo, Dept Precis Engn, Tokyo 1138654, Japan
Keywords
Learning from demonstration; human detection and tracking; transfer learning; imitation learning; hand
DOI
10.1109/LRA.2023.3300238
CLC number
TP24 [Robotics]
Discipline classification codes
080202; 1405
Abstract
Robotic insertion tasks often rely on delicate manual tuning due to the complexity of contact dynamics. In contrast, humans are remarkably efficient at these tasks. In this context, Programming by Demonstration (PbD) has gained much traction, since it enables robots to learn new skills by observing human demonstrations. However, existing PbD approaches suffer from the high cost of demonstration data collection and low robustness to task uncertainties. To address these issues, we propose a new PbD-based learning framework for robotic insertion tasks. The framework includes a new demonstration data acquisition system that replaces expensive motion-capture devices with a deep-learning-based hand pose tracking algorithm and a low-cost RGBD camera. It also includes a latent skill-guided reinforcement learning (RL) approach for safe, efficient, and robust human-robot skill transfer, in which risky exploration is prevented by the reward-function design and by safety constraints in the action space. A series of peg-in-hole insertion experiments on a FANUC industrial robot illustrates the effectiveness of the proposed approach.
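To make the safety mechanism concrete, here is a minimal Python/NumPy sketch of the two ideas the abstract names: clipping the RL policy's exploration to a bounded residual around a demonstrated nominal motion, and shaping the reward to penalize risky contact forces. All names, bounds, and signatures below are illustrative assumptions, not the authors' implementation.

import numpy as np

# Assumed per-step exploration bound of 2 mm; not taken from the paper.
MAX_RESIDUAL_M = 0.002

def safe_action(nominal_pose, policy_residual, max_residual=MAX_RESIDUAL_M):
    # Clip the learned residual so exploration stays close to the
    # demonstrated nominal motion (the action-space safety constraint).
    residual = np.clip(np.asarray(policy_residual), -max_residual, max_residual)
    return np.asarray(nominal_pose) + residual

def insertion_reward(pose, goal_pose, contact_force, force_limit=10.0):
    # Dense reward: progress toward the goal pose, with a penalty when
    # the measured contact force exceeds a safety limit, discouraging
    # risky exploration through the reward design.
    progress = -np.linalg.norm(np.asarray(pose) - np.asarray(goal_pose))
    penalty = -1.0 if abs(contact_force) > force_limit else 0.0
    return progress + penalty

# Example: a translational command whose x-residual is clipped to +2 mm.
cmd = safe_action([0.50, 0.10, 0.20], [0.004, -0.001, 0.0])

In the actual framework the residual would come from the latent-skill-guided policy; the clipping and force penalty here merely stand in for the safety constraints and reward design described above.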
Pages: 5815-5822
Page count: 8