Visual servoing with deep learning and data augmentation for robotic manipulation

Cited by: 0
Authors
Liu J. [1]
Li Y. [1]
Affiliations
[1] Beijing Institute of Technology, 5 South Zhongguancun Street, Haidian District, Beijing
Source
Journal of Advanced Computational Intelligence and Intelligent Informatics | 2020 / Vol. 24 / No. 7
Funding
National Natural Science Foundation of China;
Keywords
CNN; Data augmentation; Deep learning; Robotic manipulation; Visual servoing;
DOI
10.20965/JACIII.2020.P0953
Abstract
We propose a visual servoing (VS) approach based on deep learning that performs precise, robust, and real-time six-degrees-of-freedom (6DOF) control of robotic manipulation, easing the image-feature extraction and the estimation of the nonlinear relationship between the two-dimensional image space and the three-dimensional Cartesian space required in traditional VS tasks. Owing to the superior learning capability of convolutional neural networks (CNNs), the network autonomously learns to select and extract image features and to fit this nonlinear mapping. We also describe a method for designing and generating a dataset from one or a few images by simulating the motion of an eye-in-hand robotic system, which resolves both the large amount of data required for network training and the difficulty of collecting data in actual situations. This dataset is used to train our VS convolutional neural network. Subsequently, a two-stream network is designed and the corresponding control approach is presented. Experimental results show that the method converges robustly, with an average position error of less than 3 mm and an average rotation error of less than 2.5°. © 2020 Fuji Technology Press. All rights reserved.
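The single-image dataset-generation idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: assuming a (locally) planar scene, a simulated 6DOF displacement of an eye-in-hand camera induces the homography H = K(R − t·nᵀ/d)K⁻¹, so labeled training pairs (warped image, 6DOF pose) can be synthesized from one reference image. The function names, the pinhole intrinsics `K`, and the sampling ranges are all assumptions for illustration.

```python
import numpy as np

def rotation_from_euler(rx, ry, rz):
    # Compose rotations about the x, y, and z axes (radians).
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def planar_homography(K, R, t, n=np.array([0.0, 0.0, 1.0]), d=1.0):
    # H = K (R - t n^T / d) K^-1 for a plane with normal n at depth d.
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

def warp(img, H):
    # Inverse-map each destination pixel through H (nearest-neighbor sampling).
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts
    src /= src[2]
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

def generate_sample(img, K, rng):
    # Sample a small 6DOF camera perturbation and render the induced view;
    # the perturbation itself is the training label for pose regression.
    rot = rng.uniform(-0.05, 0.05, 3)    # radians
    trans = rng.uniform(-0.05, 0.05, 3)  # in units of the scene depth d
    R = rotation_from_euler(*rot)
    H = planar_homography(K, R, trans)
    return warp(img, H), np.concatenate([trans, rot])

rng = np.random.default_rng(0)
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
img = rng.integers(0, 255, (64, 64), dtype=np.uint8)  # stand-in reference image
sample, label = generate_sample(img, K, rng)
```

Repeating `generate_sample` many times yields an arbitrarily large set of (image, pose) pairs from a single reference image, which is the augmentation strategy the abstract credits with avoiding real-world data collection.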
Pages: 953-962
Number of pages: 9
Related papers
22 items in total
  • [1] Hutchinson S., Hager G. D., Corke P. I., A tutorial on visual servo control, IEEE Trans. on Robotics and Automation, 12, 5, pp. 651-670, (1996)
  • [2] Chaumette F., Hutchinson S., Visual Servo Control, Part I: Basic Approaches, IEEE Robotics & Automation Magazine, 13, 4, pp. 82-90, (2006)
  • [3] Chaumette F., Hutchinson S., Visual Servo Control, Part II: Advanced Approaches, IEEE Robotics & Automation Magazine, 14, 1, pp. 109-118, (2007)
  • [4] Xu D., A tutorial for monocular visual servoing, Acta Automatica Sinica, 44, 10, pp. 1729-1746, (2018)
  • [5] Janabi-Sharifi F., Deng L., Wilson W. J., Comparison of Basic Visual Servoing Methods, IEEE/ASME Trans. on Mechatronics, 16, 5, pp. 967-983, (2011)
  • [6] Gans N. R., Hutchinson S. A., Corke P. I., Performance Tests for Visual Servo Control Systems, with Application to Partitioned Approaches to Visual Servo Control, The Int. J. of Robotics Research, 22, 10-11, pp. 955-981, (2003)
  • [7] Malis E., Chaumette F., Boudet S., 2 1/2 D visual servoing, IEEE Trans. on Robotics and Automation, 15, 2, pp. 238-250, (1999)
  • [8] Janabi-Sharifi F., Wilson W. J., Automatic selection of image features for visual servoing, IEEE Trans. on Robotics and Automation, 13, 6, pp. 890-903, (1997)
  • [9] Paulin M., Petersen H. G., Automatic feature planning for robust visual servoing, 2005 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 205-211, (2005)
  • [10] Pages J., Collewet C., Chaumette F., Salvi J., Optimizing plane-to-plane positioning tasks by image-based visual servoing and structured light, IEEE Trans. on Robotics, 22, 5, pp. 1000-1010, (2006)