An Affordance Keypoint Detection Network for Robot Manipulation

Cited by: 19
Authors
Xu, Ruinian [1 ]
Chu, Fu-Jen [1 ]
Tang, Chao [1 ]
Liu, Weiyu [1 ]
Vela, Patricio A. [1 ]
Affiliations
[1] Georgia Inst Technol, Inst Robot & Intelligent Machines, Atlanta, GA 30318 USA
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2021, Vol. 6, No. 2
Funding
US National Science Foundation
Keywords
Deep learning in grasping and manipulation; perception for grasping and manipulation; RGB-D perception;
DOI
10.1109/LRA.2021.3062560
CLC Number
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
This letter investigates the addition of keypoint detections to a deep network affordance segmentation pipeline. The intent is to better interpret the functionality of object parts from a manipulation perspective. While affordance segmentation does provide label information about the potential use of object parts, it lacks predictions on the physical geometry that would support such use. The keypoints remedy the situation by providing structured predictions regarding position, direction, and extent. To support joint training of affordances and keypoints, a new dataset is created based on the UMD dataset. Called the UMD+GT affordance dataset, it emphasizes household objects and affordances. The dataset has a uniform representation for five keypoints that encodes information about where and how to manipulate the associated affordance. Visual processing benchmarking shows that the trained network, called AffKp, achieves state-of-the-art performance on affordance segmentation and satisfactory results on keypoint detection. Manipulation experiments show more stable detection of the operating position for AffKp versus segmentation-only methods, and the ability to infer object part pose and operating direction for task execution.
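The abstract states that each affordance is encoded by five keypoints carrying position, direction, and extent. As a rough illustration of how such a representation can yield actionable quantities, the sketch below assumes one center keypoint plus two pairs of axis endpoints; the specific keypoint semantics and names here are illustrative assumptions, not the paper's actual encoding.

```python
from dataclasses import dataclass
import math

# Hypothetical sketch of a five-keypoint affordance encoding: one operating
# position plus two endpoint pairs from which an operating direction and a
# (length, width) extent can be recovered. Field names are assumptions.

@dataclass
class AffordanceKeypoints:
    affordance: str                  # e.g. "grasp", "pound", "scoop"
    center: tuple[float, float]      # operating position, image pixels
    axis_a: tuple[float, float]      # endpoint along the operating direction
    axis_b: tuple[float, float]      # opposite endpoint along that axis
    side_a: tuple[float, float]      # endpoint across the part (width)
    side_b: tuple[float, float]      # opposite width endpoint

    def direction(self) -> float:
        """Operating direction as an angle in radians in the image plane."""
        dx = self.axis_a[0] - self.axis_b[0]
        dy = self.axis_a[1] - self.axis_b[1]
        return math.atan2(dy, dx)

    def extent(self) -> tuple[float, float]:
        """(length, width) of the part implied by the endpoint pairs."""
        length = math.dist(self.axis_a, self.axis_b)
        width = math.dist(self.side_a, self.side_b)
        return length, width

kp = AffordanceKeypoints("grasp", (100.0, 50.0), (130.0, 50.0),
                         (70.0, 50.0), (100.0, 60.0), (100.0, 40.0))
print(kp.direction())  # 0.0 (axis aligned with +x)
print(kp.extent())     # (60.0, 20.0)
```

A structured output like this is what lets a manipulation planner set both the approach point and the gripper orientation, which a segmentation mask alone cannot provide.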
Pages: 2870-2877 (8 pages)
Related Papers (50 total)
  • [41] Improving Robot Grasping Plans with Affordance
    Su, Yen-Feng
    Liu, Alan
    Lu, Wei-Hong
    2017 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND INTELLIGENT SYSTEMS (ARIS), 2017, : 7 - 12
  • [42] A brief review of affordance in robotic manipulation research
    Yamanobe, Natsuki
    Wan, Weiwei
    Ramirez-Alpizar, Ixchel G.
    Petit, Damien
    Tsuji, Tokuo
    Akizuki, Shuichi
    Hashimoto, Manabu
    Nagata, Kazuyuki
    Harada, Kensuke
    ADVANCED ROBOTICS, 2017, 31 (19-20) : 1086 - 1101
  • [43] Progressive Keypoint Detection With Dense Siamese Network for SAR Image Registration
    Xiang, Deliang
    Xu, Yihao
    Cheng, Jianda
    Xie, Yuzhen
    Guan, Dongdong
    IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, 2023, 59 (05) : 5847 - 5858
  • [44] Joint keypoint detection and description network for color fundus image registration
    Rivas-Villar, David
    Hervella, Alvaro S.
    Rouco, Jose
    Novo, Jorge
    QUANTITATIVE IMAGING IN MEDICINE AND SURGERY, 2023, 13 (07) : 4540 - 4562
  • [45] SKP: Semantic 3D Keypoint Detection for Category-Level Robotic Manipulation
    Luo, Zhongzhen
    Xue, Wenjie
    Chae, Julia
    Fu, Guoyi
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02): : 5437 - 5444
  • [46] Multilevel Attention Siamese Network for Keypoint Detection in Optical and SAR Images
    Zhang, Shaochen
    Fu, Zhitao
    Liu, Jun
    Su, Xin
    Luo, Bin
    Nie, Han
    Tang, Bo-Hui
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [47] Active Affordance Exploration for Robot Grasping
    Liu, Huaping
    Yuan, Yuan
    Deng, Yuhong
    Guo, Xiaofeng
    Wei, Yixuan
    Lu, Kai
    Fang, Bin
    Guo, Di
    Sun, Fuchun
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2019, PT V, 2019, 11744 : 426 - 438
  • [48] Visual affordance detection using an efficient attention convolutional neural network
    Gu, Qipeng
    Su, Jianhua
    Yuan, Lei
    NEUROCOMPUTING, 2021, 440 : 36 - 44
  • [49] Mobile Robot Manipulation using Pure Object Detection
    Griffin, Brent
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 561 - 571
  • [50] Visuo-Tactile Keypoint Correspondences for Object Manipulation
    Kim, Jeong-Jung
    Koh, Doo-Yeol
    Kim, Chang-Hyun
    2024 IEEE INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS, AIM 2024, 2024, : 399 - 403