Robotics Dexterous Grasping: The Methods Based on Point Cloud and Deep Learning

Cited: 36
Authors
Duan, Haonan [1 ,2 ,3 ]
Wang, Peng [1 ,3 ,4 ]
Huang, Yayu [1 ,3 ]
Xu, Guangyun [1 ,3 ]
Wei, Wei [1 ,3 ]
Shen, Xiaofei [1 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing, Peoples R China
[2] Univ Pittsburgh, Dept Informat Sci, Sch Comp & Informat, Pittsburgh, PA 15260 USA
[3] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
[4] Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
robotics; dexterous grasping; point cloud; deep learning; review; 3-DIMENSIONAL OBJECT RECOGNITION; NEURAL-NETWORKS; POSE ESTIMATION; MANIPULATION; MODEL; REGISTRATION; AFFORDANCES; STRATEGIES; DATASET; PICKING;
DOI
10.3389/fnbot.2021.658280
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Dexterous manipulation, and dexterous grasping in particular, is a primitive and crucial capability of robots that enables human-like behaviors. Deploying this capability allows robots to assist or substitute for humans in more complex tasks in daily life and industrial production. This paper gives a comprehensive review of point cloud and deep learning-based methods for robotic dexterous grasping from three perspectives. The core concept of the classification is the proposed generation-evaluation framework, a new categorization scheme for the mainstream methods. Two further classifications, based on learning modes and on applications, are also briefly described. This review aims to provide a guideline for researchers and developers working on robotic dexterous grasping.
Pages: 27
Cited References: 230