Simulated and Real Robotic Reach, Grasp, and Pick-and-Place Using Combined Reinforcement Learning and Traditional Controls

Cited by: 10
Authors
Lobbezoo, Andrew [1 ]
Kwon, Hyock-Ju [1 ]
Affiliation
[1] Univ Waterloo, Dept Mech & Mechatron Engn, AI Mfg Lab, Waterloo, ON N2L 3G1, Canada
Keywords
reinforcement learning; proximal policy optimization; soft actor-critic; simulation environment; robot operating system; robotic control; Franka Panda robot; pick-and-place; real-world robotics
DOI
10.3390/robotics12010012
Chinese Library Classification
TP24 [Robotics]
Discipline Classification Codes
080202; 1405
Abstract
The majority of robots in factories today are operated with conventional control strategies that require individual programming on a task-by-task basis, with no margin for error. As an alternative to these rudimentary operation-planning and task-programming techniques, machine learning has shown significant promise for higher-level task planning through the development of reinforcement learning (RL)-based control strategies. This paper reviews the implementation of combined traditional and RL control in simulated and real environments to validate the RL approach for standard industrial tasks such as reach, grasp, and pick-and-place. The goal of this research is to bring intelligence to robotic control so that robotic operations can be completed without precisely defining the environment, the constraints, and the action plan. The results from this approach provide optimistic preliminary data on the application of RL to real-world robotics.
Pages: 19