One-Shot Imitation Learning With Graph Neural Networks for Pick-and-Place Manipulation Tasks

Cited by: 1
Authors
Di Felice, Francesco [1 ]
D'Avella, Salvatore [1 ]
Remus, Alberto [1 ]
Tripicchio, Paolo [1 ]
Avizzano, Carlo Alberto [1 ]
Affiliations
[1] Scuola Super Sant Anna, Mech Intelligence Inst, Dept Excellence Robot & AI, I-56127 Pisa, Italy
Keywords
Learning from demonstration; imitation learning; task and motion planning
DOI
10.1109/LRA.2023.3301234
CLC Classification
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
The proposed work presents a framework based on Graph Neural Networks (GNNs) that abstracts the task to be executed and allows the robot to learn task-specific rules directly from synthetic demonstrations via imitation learning. A graph representation of the state space encodes the task-relevant entities as nodes for a Pick-and-Place task posed at different levels of difficulty. During training, the GNN-based policy learns the underlying rules of the manipulation task by focusing on the structural relations and the types of objects and goals, while relying on an external motion primitive to move the robot and accomplish the task. The GNN policy is trained as a node classifier: given the configurations of the objects and goals present in the scene, it learns the type-based association between them for the Pick-and-Place task. Experimental results show that the proposed model generalizes well across the number, positions, height distributions, and even configurations of the objects/goals. Thanks to this generalization, only a single image of the desired goal configuration is required at inference time.
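To make the node-classification idea concrete, the sketch below shows a minimal GNN policy in plain PyTorch that scores every node of a scene graph of objects and goals. The node-feature layout, the two rounds of mean-aggregation message passing, and the single per-node scoring head are illustrative assumptions, not the authors' architecture, which additionally relies on an external motion primitive for execution.

# Minimal sketch of a GNN node-classification policy for Pick-and-Place.
# All details (feature layout, message passing, scoring head) are assumptions
# for illustration only.
import torch
import torch.nn as nn


class GNNPickPlacePolicy(nn.Module):
    """Scores every node in the scene graph; e.g. the highest-scoring object
    node is picked and matched to the highest-scoring goal node."""

    def __init__(self, node_dim: int = 6, hidden: int = 64):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden)
        # Two rounds of mean-aggregation message passing with residual updates.
        self.msg1 = nn.Linear(hidden, hidden)
        self.msg2 = nn.Linear(hidden, hidden)
        self.classify = nn.Linear(hidden, 1)  # per-node score

    def propagate(self, h: torch.Tensor, adj: torch.Tensor, lin: nn.Linear):
        # adj: (N, N) row-normalised adjacency describing scene-graph connectivity.
        return torch.relu(lin(adj @ h) + h)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, node_dim) node features, e.g. [x, y, z, is_object, is_goal, type_id]
        h = torch.relu(self.encode(x))
        h = self.propagate(h, adj, self.msg1)
        h = self.propagate(h, adj, self.msg2)
        return self.classify(h).squeeze(-1)  # (N,) node scores


if __name__ == "__main__":
    # Toy scene: 3 objects + 2 goals, fully connected graph.
    n = 5
    x = torch.randn(n, 6)
    adj = torch.ones(n, n) / n  # row-normalised dense adjacency
    policy = GNNPickPlacePolicy()
    scores = policy(x, adj)
    print("per-node scores:", scores.detach().numpy())

In this reading, training reduces to supervising the node scores from the synthetic demonstrations, so that objects are associated with goals of the matching type regardless of their number or placement in the scene.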
Pages: 5926-5933
Page count: 8