DefGoalNet: Contextual Goal Learning from Demonstrations for Deformable Object Manipulation

Cited by: 0
Authors:
Thach, Bao [1 ,2 ]
Watts, Tanner [1 ,2 ]
Ho, Shing-Hei [1 ,2 ]
Hermans, Tucker [1 ,2 ,3 ]
Kuntz, Alan [1 ,2 ]
Affiliations:
[1] Univ Utah, Robot Ctr, Salt Lake City, UT 84112 USA
[2] Univ Utah, Kahlert Sch Comp, Salt Lake City, UT 84112 USA
[3] NVIDIA Corp, Seattle, WA USA
Keywords:
DOI: 10.1109/ICRA57147.2024.10610109
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Subject Classification: 0812
Abstract
Shape servoing, a robotic task dedicated to controlling objects to desired goal shapes, is a promising approach to deformable object manipulation. An issue arises, however, with the reliance on the specification of a goal shape. This goal has been obtained either by a laborious domain knowledge engineering process or by manually manipulating the object into the desired shape and capturing the goal shape at that specific moment, both of which are impractical in various robotic applications. In this paper, we solve this problem by developing a novel neural network, DefGoalNet, which learns deformable object goal shapes directly from a small number of human demonstrations. We demonstrate our method's effectiveness on various robotic tasks, both in simulation and on a physical robot. Notably, in the surgical retraction task, even when trained with as few as 10 demonstrations, our method achieves a median success percentage of nearly 90%. These results mark a substantial advancement in enabling shape servoing methods to bring deformable object manipulation closer to practical real-world applications.
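The abstract describes the idea only at a high level; as a rough illustration, the sketch below shows one plausible way a contextual goal-prediction network of this kind could be wired up: PointNet-style encoders for the current object and task-context point clouds, an MLP decoder that outputs a goal point cloud, and a Chamfer-distance loss against goal shapes captured from demonstrations. All class names, layer sizes, and the loss choice are assumptions for illustration, not the published DefGoalNet implementation.

```python
# Minimal sketch (not the authors' released code): a contextual goal predictor
# that maps the current object point cloud plus a task-context point cloud to
# a predicted goal point cloud. Architecture and loss are illustrative assumptions.
import torch
import torch.nn as nn


class PointCloudEncoder(nn.Module):
    """Shared-MLP encoder with max pooling (PointNet-style), assumed here."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (B, N, 3) -> global feature (B, feat_dim)
        return self.mlp(pts.transpose(1, 2)).max(dim=2).values


class GoalPredictor(nn.Module):
    """Predicts a goal point cloud from current-shape and context features."""

    def __init__(self, feat_dim: int = 256, n_goal_points: int = 512):
        super().__init__()
        self.object_enc = PointCloudEncoder(feat_dim)
        self.context_enc = PointCloudEncoder(feat_dim)
        self.decoder = nn.Sequential(
            nn.Linear(2 * feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, n_goal_points * 3),
        )
        self.n_goal_points = n_goal_points

    def forward(self, obj_pts, ctx_pts):
        feat = torch.cat([self.object_enc(obj_pts), self.context_enc(ctx_pts)], dim=1)
        return self.decoder(feat).view(-1, self.n_goal_points, 3)


def chamfer_loss(pred, target):
    # Symmetric Chamfer distance between predicted and demonstrated goal clouds.
    d = torch.cdist(pred, target)        # (B, Np, Nt) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


if __name__ == "__main__":
    model = GoalPredictor()
    obj = torch.randn(4, 1024, 3)        # current object point cloud
    ctx = torch.randn(4, 1024, 3)        # task-context point cloud
    demo_goal = torch.randn(4, 512, 3)   # goal shape captured from a demonstration
    loss = chamfer_loss(model(obj, ctx), demo_goal)
    loss.backward()
    print(loss.item())
```

The predicted goal cloud would then be handed to any downstream shape-servoing controller; that interface is outside the scope of this sketch.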
Pages: 3145 - 3152 (8 pages)
Related Papers (50 total)
  • [11] Construction of an Object Manipulation Database from Grasp Demonstrations
    Kent, David
    Chernova, Sonia
    2014 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2014), 2014, : 3347 - 3352
  • [13] Learning Manipulation Actions from a Few Demonstrations
    Abdo, Nichola
    Kretzschmar, Henrik
    Spinello, Luciano
    Stachniss, Cyrill
    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2013, : 1268 - 1275
  • [14] Learning to plan for constrained manipulation from demonstrations
    Phillips, Mike
    Hwang, Victor
    Chitta, Sachin
    Likhachev, Maxim
    AUTONOMOUS ROBOTS, 2016, 40 (01) : 109 - 124
  • [15] ReForm: A Robot Learning Sandbox for Deformable Linear Object Manipulation
    Laezza, Rita
    Gieselmann, Robert
    Pokorny, Florian T.
    Karayiannidis, Yiannis
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 4717 - 4723
  • [16] Modeling, learning, perception, and control methods for deformable object manipulation
    Yin, Hang
    Varava, Anastasia
    Kragic, Danica
    SCIENCE ROBOTICS, 2021, 6 (54)
  • [17] Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation
    Wu, Ruihai
    Ning, Chuanruo
    Dong, Hao
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 10913 - 10922
  • [18] Learning Manipulation Actions from Human Demonstrations
    Welschehold, Tim
    Dornhege, Christian
    Burgard, Wolfram
    2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016), 2016, : 3772 - 3777
  • [19] Learning Coarsened Dynamic Graph Representations for Deformable Object Manipulation
    Marchetti, Giovanni Luca
    Moletta, Marco
    Tegner, Gustaf
    Shi, Peiyang
    Varava, Anastasiia
    Kravchenko, Alexander
    Kragic, Danica
    2021 20TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS (ICAR), 2021, : 955 - 960
  • [20] Manipulation of dynamically deformable object
    Tagawa, Kazuyoshi
    Hirota, Koichi
    Hirose, Michitaka
    SYMPOSIUM ON HAPTICS INTERFACES FOR VIRTUAL ENVIRONMENT AND TELEOPERATOR SYSTEMS 2008, PROCEEDINGS, 2008, : 327+