Natural object manipulation using anthropomorphic robotic hand through deep reinforcement learning and deep grasping probability network

Cited by: 0
Authors
Edwin Valarezo Añazco
Patricio Rivera Lopez
Nahyeon Park
Jiheon Oh
Gahyeon Ryu
Mugahed A. Al-antari
Tae-Seong Kim
Affiliations
[1] Kyung Hee University, Department of Biomedical Engineering, College of Electronics and Information
Source
Applied Intelligence | 2021, Vol. 51
Keywords
Anthropomorphic robotic hand; Natural object grasping and relocation; Deep reinforcement learning; Human grasping hand poses; Deep grasping probability network; Natural policy gradient
DOI
Not available
Abstract
Human hands can perform complex manipulation of various objects. It would be beneficial if anthropomorphic robotic hands could manipulate objects like human hands; however, this remains challenging due to their high dimensionality and a lack of machine intelligence. In this work, we propose a novel framework based on Deep Reinforcement Learning (DRL) with a Deep Grasping Probability Network (DGPN) to grasp and relocate various objects with an anthropomorphic robotic hand, much like a human hand. DGPN predicts the probability of successful human-like natural grasping based on priors of human grasping hand poses and object touch areas. Thus, our DRL with DGPN rewards natural grasping hand poses according to object geometry for successful human-like manipulation of objects. The proposed DRL with DGPN is evaluated by grasping and relocating five objects: an apple, a light bulb, a cup, a bottle, and a can. Its performance is compared with that of the standard DRL without DGPN. The results show that the standard DRL achieves an average success rate of only 22.60%, whereas our DRL with DGPN achieves 89.40% on the grasping and relocation tasks.
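The record gives no implementation detail beyond the abstract, but the core idea (adding a DGPN-predicted grasp probability as a bonus to the DRL task reward) can be sketched. The following minimal Python/PyTorch sketch is an illustration under assumptions, not the authors' code: the network architecture, the feature dimensions (pose_dim, touch_dim), and the weighting coefficient beta are all hypothetical placeholders.

# Minimal sketch (not the authors' implementation): a DGPN-style network that
# scores a hand pose against human grasping priors, and a shaped DRL reward
# that adds this score as a bonus. Layer widths, feature sizes, and `beta`
# are illustrative assumptions.
import torch
import torch.nn as nn


class GraspingProbabilityNet(nn.Module):
    """Maps hand-pose and object touch-area features to P(natural grasp succeeds)."""

    def __init__(self, pose_dim: int = 24, touch_dim: int = 16, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim + touch_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # output in [0, 1], read as a grasp-success probability
        )

    def forward(self, hand_pose: torch.Tensor, touch_area: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([hand_pose, touch_area], dim=-1)).squeeze(-1)


def shaped_reward(env_reward: float,
                  hand_pose: torch.Tensor,
                  touch_area: torch.Tensor,
                  dgpn: GraspingProbabilityNet,
                  beta: float = 1.0) -> float:
    """Task reward plus a bonus proportional to the predicted grasp probability."""
    with torch.no_grad():
        p_grasp = dgpn(hand_pose, touch_area).item()
    return env_reward + beta * p_grasp


if __name__ == "__main__":
    dgpn = GraspingProbabilityNet()
    pose = torch.randn(24)   # e.g., joint angles of the anthropomorphic hand
    touch = torch.randn(16)  # e.g., encoded object touch-area descriptor
    print(shaped_reward(env_reward=0.5, hand_pose=pose, touch_area=touch, dgpn=dgpn))

In this reading, a policy-gradient learner (the abstract's keywords mention Natural Policy Gradient) would simply optimize the shaped reward, so hand poses that DGPN scores as human-like and geometry-appropriate are favored during training.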
Pages: 1041-1055
Page count: 14
Related Papers
50 records in total
[31] Wang, Donghai; Liu, Shun; Zou, Jing; Qiao, Wenjun; Jin, Sun. Flexible robotic cell scheduling with graph neural network based deep reinforcement learning. Journal of Manufacturing Systems, 2025, 78: 81-93.
[32] Yasuda, Toshiyuki; Ohkura, Kazuhiro. Collective Behavior Acquisition of Real Robotic Swarms using Deep Reinforcement Learning. 2018 Second IEEE International Conference on Robotic Computing (IRC), 2018: 179-180.
[33] Fan, Chun-Hao; Wu, Rih-Teng; Chang, Yung-I. Robotic inspection for autonomous crack segmentation and exploration using deep reinforcement learning. Automation in Construction, 2025, 176.
[34] Jin, Boyin; Liang, Yupeng; Han, Ziyao; Ohkura, Kazuhiro. Generating collective foraging behavior for robotic swarm using deep reinforcement learning. Artificial Life and Robotics, 2020, 25: 588-595.
[35] Jin, Boyin; Liang, Yupeng; Han, Ziyao; Ohkura, Kazuhiro. Generating collective foraging behavior for robotic swarm using deep reinforcement learning. Artificial Life and Robotics, 2020, 25(04): 588-595.
[36] Cho, Choong-hee; Lee, Hyunho; Kim, Taeyoung; Ryoo, Jeong-dong. Improving Network Availability with Low Network Construction Cost through Deep Reinforcement Learning. 2019 International Conference on Information and Communication Technologies for Disaster Management (ICT-DM 2019), 2019.
[37] Eoh, Gyuho; Park, Tae-Hyoung. Cooperative Object Transportation Using Curriculum-Based Deep Reinforcement Learning. Sensors, 2021, 21(14).
[38] Bao, Jiatong; Zhang, Guoqing; Peng, Yi; Shao, Zhiyu; Song, Aiguo. Learn multi-step object sorting tasks through deep reinforcement learning. Robotica, 2022, 40(11): 3878-3894.
[39] Kundačina, Ognjen B.; Vidović, Predrag M.; Petković, Milan R. Solving dynamic distribution network reconfiguration using deep reinforcement learning. Electrical Engineering, 2022, 104: 1487-1501.
[40] Liu, Xing; Qian, Cheng; Yu, Wei; Griffith, David; Gopstein, Avi; Golmie, Nada. Using Deep Reinforcement Learning to Automate Network Configurations for Internet of Vehicles. IEEE Transactions on Intelligent Transportation Systems, 2023, 24(12): 15948-15958.