An Affordance Keypoint Detection Network for Robot Manipulation

Cited by: 19
Authors
Xu, Ruinian [1 ]
Chu, Fu-Jen [1 ]
Tang, Chao [1 ]
Liu, Weiyu [1 ]
Vela, Patricio A. [1 ]
Affiliations
[1] Georgia Inst Technol, Inst Robot & Intelligent Machines, Atlanta, GA 30318 USA
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2021, Vol. 6, No. 2
Funding
U.S. National Science Foundation;
Keywords
Deep learning in grasping and manipulation; perception for grasping and manipulation; RGB-D perception;
DOI
10.1109/LRA.2021.3062560
CLC number
TP24 [Robotics];
Discipline codes
080202; 1405;
Abstract
This letter investigates the addition of keypoint detections to a deep network affordance segmentation pipeline. The intent is to better interpret the functionality of object parts from a manipulation perspective. While affordance segmentation provides label information about the potential use of object parts, it lacks predictions on the physical geometry that would support such use. The keypoints remedy this by providing structured predictions regarding position, direction, and extent. To support joint training of affordances and keypoints, a new dataset is created based on the UMD dataset. Called the UMD+GT affordance dataset, it emphasizes household objects and affordances. The dataset has a uniform representation for five keypoints that encodes where and how to manipulate the associated affordance. Visual processing benchmarking shows that the trained network, called AffKp, achieves state-of-the-art performance on affordance segmentation and satisfactory results on keypoint detection. Manipulation experiments show more stable detection of the operating position for AffKp versus segmentation-only methods, and the ability to infer object part pose and operating direction for task execution.
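To make the abstract's five-keypoint representation concrete, below is a minimal illustrative sketch (not the authors' released code) of how such a prediction could be stored and queried for an operating position, direction, and extent. The class name, keypoint ordering, and geometric semantics are assumptions made for illustration only.

# A minimal sketch (NOT the AffKp implementation) of a five-keypoint
# affordance prediction. The keypoint layout below is a hypothetical
# convention chosen for illustration.
from dataclasses import dataclass

import numpy as np


@dataclass
class AffordanceKeypoints:
    """Five 2-D keypoints for one detected affordance region.

    Assumed layout (hypothetical): kp[0] is the operating point,
    kp[1]-kp[2] span the affordance axis (direction), and
    kp[3]-kp[4] span its width (extent).
    """
    label: str       # affordance class, e.g. "grasp" or "pound"
    kp: np.ndarray   # shape (5, 2), pixel coordinates

    def operating_position(self) -> np.ndarray:
        return self.kp[0]

    def operating_direction(self) -> np.ndarray:
        d = self.kp[2] - self.kp[1]
        return d / (np.linalg.norm(d) + 1e-8)  # unit vector along the axis

    def extent(self) -> float:
        return float(np.linalg.norm(self.kp[4] - self.kp[3]))


# Usage: a hammer head predicted with a "pound" affordance.
pred = AffordanceKeypoints(
    label="pound",
    kp=np.array([[120, 80], [100, 80], [140, 80], [120, 70], [120, 90]],
                dtype=float),
)
print(pred.operating_position(), pred.operating_direction(), pred.extent())

In the actual pipeline, such keypoints would be regressed by a network head alongside the affordance segmentation masks; the geometric derivations above are one plausible way to convert them into an operating pose for task execution.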
Pages: 2870 - 2877
Page count: 8
Related Papers
50 records in total
  • [31] An FPGA-Based High-Throughput Keypoint Detection Accelerator Using Convolutional Neural Network for Mobile Robot Applications
    Li, Jingyuan
    Liu, Ye
    Huang, Kun
    Zhou, Liang
    Chang, Liang
    Zhou, Jun
    2022 IEEE ASIA PACIFIC CONFERENCE ON POSTGRADUATE RESEARCH IN MICROELECTRONICS AND ELECTRONICS, PRIMEASIA, 2022, : 81 - 84
  • [32] A New Semantic Edge Aware Network for Object Affordance Detection
    Yin, Congcong
    Zhang, Qiuju
    Ren, Wenqiang
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2022, 104
  • [33] A4T: Hierarchical Affordance Detection for Transparent Objects Depth Reconstruction and Manipulation
    Jiang, Jiaqi
    Cao, Guanqun
    Do, Thanh-Toan
    Luo, Shan
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04) : 9826 - 9833
  • [34] KETO: Learning Keypoint Representations for Tool Manipulation
    Qin, Zengyi
    Fang, Kuan
    Zhu, Yuke
    Li, Fei-Fei
    Savarese, Silvio
    2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 7278 - 7285
  • [35] Toward Affordance Detection and Ranking on Novel Objects for Real-World Robotic Manipulation
    Chu, Fu-Jen
    Xu, Ruinian
    Seguin, Landan
    Vela, Patricio A.
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2019, 4 (04): : 4070 - 4077
  • [36] Face Keypoint Detection Method Based on Blaze_ghost Network
    Yu, Ning
    Tian, Yongping
    Zhang, Xiaochuan
    Yin, Xiaofeng
    APPLIED SCIENCES-BASEL, 2023, 13 (18):
  • [37] Masked Loss Residual Convolutional Neural Network For Facial Keypoint Detection
    Xu, Junhong
    Wu, Shaoen
    Zhu, Shangyue
    Guo, Hanqing
    Wang, Honggang
    Yang, Qing
    10TH EAI INTERNATIONAL CONFERENCE ON MOBILE MULTIMEDIA COMMUNICATIONS (MOBIMEDIA 2017), 2017, : 234 - 239
  • [38] Pose Anchor: A Single-Stage Hand Keypoint Detection Network
    Li, Yuan
    Wang, Xinggang
    Liu, Wenyu
    Feng, Bin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (07) : 2104 - 2113
  • [39] An Improved Deep Keypoint Detection Network for Space Targets Pose Estimation
    Xu, Junjie
    Song, Bin
    Yang, Xi
    Nan, Xiaoting
    REMOTE SENSING, 2020, 12 (23) : 1 - 21
  • [40] SEHRNet: A lightweight, high-resolution network for aircraft keypoint detection
    Zhang, Zhiqiang
    Zhang, Tianxiong
    Zhu, Xinping
    Li, Jiajun
    IET IMAGE PROCESSING, 2024, 18 (09) : 2476 - 2489