A Human-Robot Collaboration Method Using a Pose Estimation Network for Robot Learning of Assembly Manipulation Trajectories From Demonstration Videos

Cited by: 7
Authors
Deng, Xinjian [1 ]
Liu, Jianhua [1 ]
Gong, Honghui [2 ]
Gong, Hao [1 ]
Huang, Jiayu [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Mech Engn, Beijing 100081, Peoples R China
[2] Shenyang Normal Univ, Software Coll, Shenyang 110034, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China
Keywords
Robots; Trajectory; Task analysis; Videos; Service robots; Cameras; Sensors; Image processing; industrial robot; intelligent assembly; learning from demonstration; PREDICTION;
DOI
10.1109/TII.2022.3224966
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
The wide application of industrial robots has greatly improved assembly efficiency and reliability. However, efficiently teaching a robot assembly manipulation trajectories from demonstration videos remains challenging. This article proposes a method integrating deep learning, image processing, and an iteration model to predict the real assembly manipulation trajectory of a human hand from a video without specific depth information. First, a pose estimation network, Keypoint R-CNN, is used to accurately estimate the hand pose in the 2-D image of each video frame. Second, image processing is applied to map the 2-D hand pose estimated by the network to the real 3-D assembly space. Third, an iteration model based on the trust-region algorithm is proposed to solve for the quaternion and translation vector between two frames; together, these quaternions and translation vectors form the predicted assembly manipulation trajectories. Finally, a UR3 robot imitates the assembly operation based on the predicted manipulation trajectories. The results show that the robot successfully imitated various operations.
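A minimal sketch of the pipeline outlined in the abstract, not the authors' implementation: it uses torchvision's COCO-pretrained Keypoint R-CNN (the paper presumably uses a hand-specific model) and substitutes SciPy's trust-region-reflective least-squares solver for the paper's iteration model. The 2-D-to-3-D mapping step is omitted, so the function names, keypoint handling, and parameterization below are illustrative assumptions.

```python
import numpy as np
import torch
from scipy.optimize import least_squares
from torchvision.models.detection import keypointrcnn_resnet50_fpn

# COCO person-keypoint weights; a hand-keypoint model would require retraining.
model = keypointrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def detect_keypoints(frame_rgb: np.ndarray) -> np.ndarray:
    """Return (K, 2) pixel coordinates of the highest-scoring detection in one frame."""
    tensor = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0
    out = model([tensor])[0]
    return out["keypoints"][0, :, :2].numpy()

def quat_to_rot(q: np.ndarray) -> np.ndarray:
    """Rotation matrix from a (possibly unnormalized) quaternion [w, x, y, z]."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def fit_frame_transform(p_prev: np.ndarray, p_curr: np.ndarray):
    """Fit R(q), t minimizing ||R(q) p_prev + t - p_curr|| between two frames'
    keypoint sets (assumed already mapped to 3-D assembly-space coordinates)."""
    def residual(x):
        q, t = x[:4], x[4:]
        # quat_to_rot normalizes q, so the residual is invariant to its scale
        return (p_prev @ quat_to_rot(q).T + t - p_curr).ravel()

    x0 = np.array([1.0, 0, 0, 0, 0, 0, 0])          # identity rotation, zero translation
    sol = least_squares(residual, x0, method="trf")  # trust-region reflective solver
    q = sol.x[:4] / np.linalg.norm(sol.x[:4])
    return q, sol.x[4:]
```

Chaining fit_frame_transform over consecutive frame pairs yields the sequence of quaternions and translation vectors that, per the abstract, constitutes the predicted manipulation trajectory.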
Pages: 7160-7168
Page count: 9