Hand Pose-based Task Learning from Visual Observations with Semantic Skill Extraction

Times Cited: 0
Authors
Qiu, Zeju [1 ]
Eiband, Thomas [1 ,2 ]
Li, Shile [1 ]
Lee, Dongheui [1 ]
Affiliations
[1] Tech Univ Munich, Chair Human Ctr Assist Robot, Munich, Germany
[2] German Aerosp Ctr DLR, Inst Robot & Mechatron, Wessling, Germany
Source
2020 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN) | 2020
Keywords
MOTION;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning from Demonstration is a promising technique for transferring task knowledge from a user to a robot. We propose a framework for task programming by observing the human hand pose and object locations solely with a depth camera. By extracting skills from the demonstrations, we are able to represent what the robot has learned, generalize to unseen object locations, and optimize the robotic execution instead of replaying non-optimal behavior. A two-stage segmentation algorithm that employs skill template matching via Hidden Markov Models was developed to extract motion primitives from the demonstration and assign them semantic meaning. In this way, the transfer of task knowledge is improved from a simple replay of the demonstration towards a semantically annotated, optimized, and generalized execution. We evaluate the extraction of a set of skills in simulation and show that the task execution can be optimized by these means.
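The skill template matching described above can be illustrated with a minimal sketch: each candidate skill is represented by a Hidden Markov Model, and a demonstration segment is labeled with the skill whose model assigns it the highest log-likelihood. This is a hypothetical toy version, not the authors' implementation; the discrete observation symbols and the skill names `reach` and `grasp` stand in for the continuous hand-pose and object-location features used in the paper.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step scaling to avoid
    numerical underflow on long sequences."""
    n = len(pi)
    # Initialization: alpha[s] = P(first observation, hidden state s)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    log_lik = 0.0
    for t in range(len(obs)):
        if t > 0:
            # Recursion: propagate through transitions, weight by emissions
            alpha = [B[s][obs[t]] * sum(alpha[p] * A[p][s] for p in range(n))
                     for s in range(n)]
        c = sum(alpha)            # scaling factor for this time step
        log_lik += math.log(c)
        alpha = [a / c for a in alpha]
    return log_lik

def match_skill(segment, skill_models):
    """Label a demonstration segment with the skill whose HMM template
    explains it best (maximum log-likelihood)."""
    return max(skill_models,
               key=lambda name: forward_log_likelihood(segment, *skill_models[name]))

# Toy skill templates (hypothetical): 2 hidden states, 2 observation
# symbols; each entry is (initial distribution, transitions, emissions).
skill_models = {
    "reach": ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.1, 0.9]]),
    "grasp": ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], [[0.1, 0.9], [0.9, 0.1]]),
}
print(match_skill([0, 0, 0, 0, 1, 0, 0], skill_models))  # -> reach
```

In a real pipeline the emissions would be continuous (e.g., Gaussian over hand-pose features) rather than discrete symbols, but the matching principle is the same: score the segment against every skill template and keep the best-scoring one.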
Pages: 596 - 603
Number of Pages: 8