HANDAL: A Dataset of Real-World Manipulable Object Categories with Pose Annotations, Affordances, and Reconstructions

Cited by: 0
Authors
Guo, Andrew [1 ]
Wen, Bowen [1 ]
Yuan, Jianhe [1 ]
Tremblay, Jonathan [1 ]
Tyree, Stephen [1 ]
Smith, Jeffrey [1 ]
Birchfield, Stan [1 ]
Affiliations
[1] NVIDIA, Santa Clara, CA 95051 USA
Source
2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | 2023
DOI: 10.1109/IROS55552.2023.10341672
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
We present the HANDAL dataset for category-level object pose estimation and affordance prediction. Unlike previous datasets, ours is focused on robotics-ready manipulable objects that are of the proper size and shape for functional grasping by robot manipulators, such as pliers, utensils, and screwdrivers. Our annotation process is streamlined, requiring only a single off-the-shelf camera and semi-automated processing, allowing us to produce high-quality 3D annotations without crowd-sourcing. The dataset consists of 308k annotated image frames from 2.2k videos of 212 real-world objects in 17 categories. We focus on hardware and kitchen tool objects to facilitate research in practical scenarios in which a robot manipulator needs to interact with the environment beyond simple pushing or indiscriminate grasping. We outline the usefulness of our dataset for 6-DoF category-level pose+scale estimation and related tasks. We also provide 3D reconstructed meshes of all objects, and we outline some of the bottlenecks to be addressed for democratizing the collection of datasets like this one. Project website: https://nvlabs.github.io/HANDAL/
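The abstract describes the dataset's target task of 6-DoF category-level pose+scale estimation. As a rough illustration only (this is not the official HANDAL tooling; the function name, annotation layout, and all numeric values below are assumptions), the following minimal Python sketch shows how a single rotation, translation, and per-axis scale annotation could be used to project a canonical 3D bounding box into an image with pinhole intrinsics:

# Minimal sketch, assuming a pose annotation given as rotation R, translation t,
# per-axis scale, and camera intrinsics K. Not the HANDAL API; all values are hypothetical.
import numpy as np

def project_box(R, t, scale, K):
    """Project the 8 corners of a canonical unit box under pose (R, t) and per-axis scale."""
    # Corners of a canonical unit cube centered at the origin.
    corners = np.array([[x, y, z]
                        for x in (-0.5, 0.5)
                        for y in (-0.5, 0.5)
                        for z in (-0.5, 0.5)])   # shape (8, 3)
    corners = corners * scale                    # apply metric scale per axis
    cam_pts = corners @ R.T + t                  # rotate and translate into the camera frame
    uvw = cam_pts @ K.T                          # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]              # normalize to pixel coordinates

# Hypothetical example values.
R = np.eye(3)                                    # object rotation in the camera frame
t = np.array([0.0, 0.0, 0.5])                    # translation in meters
scale = np.array([0.20, 0.03, 0.01])             # e.g., a screwdriver-sized box
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])                  # camera intrinsics
print(project_box(R, t, scale, K))

The projected corners can then be compared against a method's predicted pose+scale, which is the typical way category-level pose benchmarks visualize and evaluate predictions.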
Pages: 11428 - 11435
Page count: 8