Real-time grasping strategies using event camera

Cited: 24
Authors
Huang, Xiaoqian [1 ]
Halwani, Mohamad [1 ]
Muthusamy, Rajkumar [2 ]
Ayyad, Abdulla [1 ,3 ]
Swart, Dewald [4 ]
Seneviratne, Lakmal [1 ]
Gan, Dongming [5 ]
Zweiri, Yahya [1 ,6 ]
Affiliations
[1] Khalifa Univ, Khalifa Univ Ctr Autonomous Robot Syst KUCARS, Abu Dhabi, U Arab Emirates
[2] Dubai Future Labs, Dubai, U Arab Emirates
[3] Khalifa Univ Sci & Technol, Aerosp Res & Innovat Ctr ARIC, Abu Dhabi, U Arab Emirates
[4] Strata Mfg PJSC, Res & Dev, Al Ain, U Arab Emirates
[5] Purdue Univ, Sch Engn Technol, W Lafayette, IN 47907 USA
[6] Khalifa Univ, Dept Aerosp Engn, Abu Dhabi, U Arab Emirates
Keywords
Neuromorphic vision; Model-based grasping; Model-free grasping; Multi-object grasping; Event camera; VISION;
DOI
10.1007/s10845-021-01887-9
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Robotic vision plays a key role in perceiving the environment for grasping applications. However, conventional frame-based robotic vision suffers from motion blur and a low sampling rate, and may not meet the automation needs of evolving industrial requirements. This paper proposes, for the first time, an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Exploiting the event camera's microsecond-level sampling rate and freedom from motion blur, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. The model-based approach uses event-based multi-view perception to localize the objects in the scene, and then applies point cloud processing to cluster and register them. The model-free approach, on the other hand, combines the developed event-based object segmentation, visual servoing, and grasp planning to localize, align to, and grasp the target object. The proposed approaches are experimentally validated with objects of different sizes, using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper. The framework also demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
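For illustration, the event-based object localization step described in the abstract can be sketched in simplified form: asynchronous events are accumulated into a per-pixel count image, sparse noise is thresholded away, and connected active pixels are grouped into clusters whose centroids serve as candidate object locations. This is a minimal sketch under assumed event tuples `(x, y, t, polarity)`, not the authors' implementation; the function name `localize_objects` and its parameters are hypothetical.

```python
from collections import deque

def localize_objects(events, width, height, min_count=2, min_cluster=3):
    """Cluster accumulated event-camera activity into object candidates.

    events: iterable of (x, y, t, polarity) tuples (hypothetical format).
    Returns a list of (cx, cy) centroids, one per detected cluster.
    """
    # Accumulate events into a per-pixel count image.
    counts = [[0] * width for _ in range(height)]
    for x, y, _t, _polarity in events:
        counts[y][x] += 1

    # Keep only pixels with enough events to suppress sensor noise.
    active = {(x, y) for y in range(height) for x in range(width)
              if counts[y][x] >= min_count}

    # Flood-fill 4-connected active pixels into clusters.
    centroids = []
    while active:
        seed = active.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            cx, cy = queue.popleft()
            for nbr in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nbr in active:
                    active.remove(nbr)
                    queue.append(nbr)
                    cluster.append(nbr)
        # Small clusters are treated as residual noise and discarded.
        if len(cluster) >= min_cluster:
            centroids.append((sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster)))
    return centroids
```

In practice the paper's pipeline operates on real neuromorphic sensor streams and feeds the localized regions into point cloud registration (model-based) or visual servoing (model-free); the sketch above only conveys the accumulate-threshold-cluster idea.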
Pages: 593-615
Page count: 23
Related References
45 in total
[1] Anonymous. DAVIS 346.
[2] Anonymous. MULTIFINGERED PROGRA.
[3] Anonymous. UR10 TECHNICAL SPECI.
[4] Asadi K, Haritsa VR, Han K, Ore J-P. Automated Object Manipulation Using Vision-Based Mobile Robotic System for Construction Applications. Journal of Computing in Civil Engineering, 2021, 35(01).
[5] Barranco F. IEEE International Conference on Intelligent Robots and Systems (IROS), 2018: 5764. DOI: 10.1109/IROS.2018.8593380.
[6] Bohg J, Morales A, Asfour T, Kragic D. Data-Driven Grasp Synthesis - A Survey. IEEE Transactions on Robotics, 2014, 30(02): 289-309.
[7] Bolya D, Zhou C, Xiao F, Lee YJ. YOLACT: Real-time Instance Segmentation. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019: 9156-9165.
[8] Chen C, Ling Q. Adaptive Convolution for Object Detection. IEEE Transactions on Multimedia, 2019, 21(12): 3205-3217.
[9] Chen ZH. Chinese Control Conference, 2017: 11223. DOI: 10.23919/ChiCC.2017.8029147.
[10] Cortes C, Vapnik V. Support-Vector Networks. Machine Learning, 1995, 20(03): 273-297.