View-based teaching/playback for robotic manipulation

Cited by: 0
Authors
Maeda Y. [1 ]
Nakamura T. [2 ]
Affiliations
[1] Faculty of Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya-ku, Yokohama 240-8501, Japan
[2] Nikon Corp., Tokyo
Source
ROBOMECH Journal | 2015, Vol. 2, No. 1
Funding
Japan Society for the Promotion of Science
Keywords
Neural networks; Robot programming; View-based approach;
DOI
10.1186/s40648-014-0025-4
Abstract
In this paper, we study a new method for robot programming: view-based teaching/playback. It is developed to be more robust against changes in task conditions than conventional teaching/playback without losing the general versatility of that approach. As a proof of concept, the method was implemented and tested in a virtual environment. The method consists of two parts: a teaching phase and a playback phase. In the teaching phase, a human operator commands a robot to achieve a manipulation task; all movements of the robot are recorded, and a camera also records images of the teaching scenes. A mapping from the recorded images to the recorded movements is then obtained as an artificial neural network. In the playback phase, the motion of the robot is determined by the output of the neural network computed from the current scene images. We applied this view-based teaching/playback to pick-and-place and pushing tasks performed by a robot hand with eight degrees of freedom in the virtual environment. Human-demonstrated manipulation was successfully reproduced by the robot hand with the proposed method. Moreover, the method also successfully manipulated the object from some initial positions that were not identical to those in the demonstrations. © 2015, Maeda and Nakamura; licensee Springer.
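To make the two-phase pipeline concrete, the following is a minimal sketch in Python of the image-to-motion mapping described in the abstract. It is not the authors' implementation: scikit-learn's MLPRegressor stands in for the paper's neural network, flattened downsampled grayscale images stand in for the recorded scene views, and names such as `record_teaching_data` and `playback_step` are hypothetical placeholders with synthetic data of the right shapes.

```python
# Sketch of view-based teaching/playback (illustrative, not the authors' code).
# Teaching phase: pairs of (scene image, robot motion command) are recorded.
# Playback phase: a neural network trained on those pairs maps new scene
# images to motion commands.
import numpy as np
from sklearn.neural_network import MLPRegressor

IMG_H, IMG_W = 32, 32   # assumed downsampled grayscale resolution
DOF = 8                 # robot hand with eight degrees of freedom

def record_teaching_data(n_samples=500, rng=np.random.default_rng(0)):
    """Hypothetical stand-in for logging images and commands during teaching.
    Here we only fabricate synthetic arrays with plausible shapes."""
    images = rng.random((n_samples, IMG_H * IMG_W))      # flattened pixels in [0, 1]
    motions = rng.uniform(-1.0, 1.0, (n_samples, DOF))   # per-DOF motion commands
    return images, motions

# --- Teaching phase: learn the image -> motion mapping ---
X_train, y_train = record_teaching_data()
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# --- Playback phase: compute robot motion from the current scene image ---
def playback_step(current_image):
    """Return one motion command predicted from a flattened scene image."""
    return net.predict(current_image.reshape(1, -1))[0]

test_image = np.random.default_rng(1).random(IMG_H * IMG_W)
print("Predicted 8-DOF command:", playback_step(test_image))
```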