Learn, detect, and grasp objects in real-world settings

Cited by: 0
Authors
Vincze, Markus [1 ]
Patten, Timothy [1 ]
Park, Kiru [1 ]
Bauer, Dominik [1 ]
Affiliations
[1] Tech Univ Wien, Automatisierungs & Regelungstech Inst, Vienna, Austria
Source
ELEKTROTECHNIK UND INFORMATIONSTECHNIK | 2020, Vol. 137, No. 6
Keywords
object detection; learning; recognition; scene understanding; grasping;
DOI
10.1007/s00502-020-00817-6
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809
Abstract
Experts predict that future robot applications will require safe and predictable operation: robots will need to be able to explain what they are doing in order to be trusted. To reach this goal, they will need to perceive their environment and its objects so as to better understand the world and the tasks they have to perform. This article gives an overview of recent advances, with a focus on methods to learn, detect, and grasp objects. With the advent of colour and depth (RGB-D) cameras and the progress in AI and deep learning methods, robot vision has advanced considerably in recent years. We summarise recent results on object pose estimation and on verifying object poses using a digital twin and physics simulation. The idea is that any hypothesis from an object detector and pose estimator is verified, leveraging the continuous advances in deep learning approaches to create object hypotheses. We then show that the estimated object poses are robust enough for a mobile manipulator to approach and grasp the object. We intend to show that it is now feasible to model, recognise, and grasp many objects with good performance, though further work is needed for applications in industrial settings.
Pages: 324-330
Page count: 7