RAMP: A Benchmark for Evaluating Robotic Assembly Manipulation and Planning

Cited by: 3
Authors
Collins J. [1]
Robson M. [2]
Yamada J. [1]
Sridharan M. [3]
Janik K. [4]
Posner I. [1]
Affiliations
[1] University of Oxford, Applied AI Lab, Oxford Robotics Institute, Oxford
[2] University of Birmingham, Birmingham
[3] School of Computer Science, University of Birmingham, Intelligent Robotics Lab, Birmingham
[4] The Manufacturing Technology Centre, Coventry
Funding
Engineering and Physical Sciences Research Council (EPSRC)
Keywords
assembly; manipulation planning; performance evaluation and benchmarking; task and motion planning
DOI
10.1109/LRA.2023.3330611
Abstract
We introduce RAMP, an open-source robotics benchmark inspired by real-world industrial assembly tasks. RAMP consists of beams that a robot must assemble into specified goal configurations using pegs as fasteners. As such, it assesses planning and execution capabilities and poses challenges in perception, reasoning, manipulation, diagnostics, fault recovery, and goal parsing. RAMP has been designed to be accessible and extensible: parts are either 3D printed or constructed from readily obtainable materials, and the part designs and detailed instructions are publicly available. To broaden community engagement, RAMP incorporates fixtures such as April Tags, which enable researchers to focus on individual sub-tasks of the assembly challenge if desired. We provide a full digital twin as well as rudimentary baselines to enable rapid progress. Our vision is for RAMP to form the substrate for a community-driven endeavour that evolves as capability matures.
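As a purely illustrative aside on the kind of goal configuration the abstract describes (beams placed in target poses and joined by pegs), a minimal Python sketch of one possible in-memory representation is shown below. All class and field names here are invented for illustration and do not reflect RAMP's actual specification format or API.

```python
# Hypothetical sketch of a RAMP-style goal configuration.
# Names and structure are illustrative only, not the benchmark's format.
from dataclasses import dataclass
from typing import List, Set, Tuple


@dataclass
class BeamPlacement:
    beam_id: str                 # which beam to place
    pose: Tuple[float, ...]      # target pose, e.g. (x, y, z, qx, qy, qz, qw)


@dataclass
class PegJoint:
    peg_id: str                  # which peg acts as the fastener
    beams: Tuple[str, str]       # the two beams this peg joins


@dataclass
class GoalConfiguration:
    beams: List[BeamPlacement]   # desired placement of every beam
    joints: List[PegJoint]       # peg fastenings that must hold in the goal

    def is_complete(self, placed: Set[str], fastened: Set[str]) -> bool:
        """True once all required beams are placed and all pegs inserted."""
        return (
            {b.beam_id for b in self.beams} <= placed
            and {j.peg_id for j in self.joints} <= fastened
        )


# Example usage with two beams joined by one peg.
goal = GoalConfiguration(
    beams=[
        BeamPlacement("beam_a", (0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)),
        BeamPlacement("beam_b", (0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)),
    ],
    joints=[PegJoint("peg_1", ("beam_a", "beam_b"))],
)
print(goal.is_complete({"beam_a", "beam_b"}, {"peg_1"}))  # True
```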
Pages: 9-16
Number of pages: 7