A Motion Planning Method for Visual Servoing Using Deep Reinforcement Learning in Autonomous Robotic Assembly

Cited by: 8
Authors
Liu, Zhenyu [1 ,2 ]
Wang, Ke [1 ,2 ]
Liu, Daxin [1 ,2 ]
Wang, Qide [1 ,2 ]
Tan, Jianrong [1 ,2 ]
Affiliations
[1] Zhejiang Univ, State Key Lab Comp Aided Design & Comp CAD&CG, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Engn Res Ctr Design Engn & Digital Twin Zhejiang P, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Deep reinforcement learning (DRL); motion planning; robotic assembly; visual servoing (VS); TRACKING; POSITION;
DOI
10.1109/TMECH.2023.3275854
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Assembly positioning by visual servoing (VS) is a basis for autonomous robotic assembly. In practice, VS control suffers from potential stability and convergence problems due to image and physical constraints, e.g., field-of-view constraints, image local minima, obstacle collisions, and occlusion. Therefore, this article proposes a novel deep reinforcement learning-based hybrid visual servoing (DRL-HVS) controller for motion planning of VS tasks. The DRL-HVS controller takes the currently observed image features and camera pose as inputs, and the core parameters of hybrid VS are dynamically optimized by a deep deterministic policy gradient (DDPG) algorithm to obtain an optimal motion scheme that accounts for image/physical constraints and robot motion performance. In addition, an adaptive exploration strategy is proposed to further improve training efficiency by adaptively tuning the exploration noise parameters. In this way, the DRL-HVS controller, pretrained offline in a virtual environment where the DDPG actor-critic network is continuously optimized, can be quickly deployed to a real robot system for real-time control. Experiments on an eye-in-hand VS system are conducted with a calibrated HIKVISION RGB camera mounted on the end-effector of a GSK-RB03A1 six-degree-of-freedom (6-DoF) robot. Basic VS task experiments show that the proposed controller outperforms existing methods: the servoing time is 24% shorter than that of the five-dimensional VS method, the success rate is 100% for initial-pose perturbations within 25 mm in translation and 20 degrees in rotation, and efficiency is improved by 48%. Moreover, a planetary gear component assembly case study, in which the robot automatically places the gears on the gear shafts, demonstrates the applicability of the proposed method in practice.
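The record contains no code, but the control structure described in the abstract can be illustrated with a minimal sketch: a DDPG-style actor maps the observed image-feature and camera-pose errors to hybrid-VS parameters, the resulting velocity command blends an image-based and a position-based term, and a simple rule shrinks the Gaussian exploration noise as episode returns improve. Everything below is an illustrative assumption, not the authors' implementation: the function names, the three-parameter action space (a blend weight and two gains), the toy dynamics, and the specific noise-adaptation rule.

```python
# Minimal sketch (assumptions, not the paper's code): a DDPG-style actor picks
# hybrid-VS parameters, and the exploration noise is adapted from the return.
import numpy as np

def actor(state, weights):
    """Stand-in for the DDPG actor: maps the observed state (image-feature
    errors + camera-pose errors) to hybrid-VS parameters squashed into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(weights @ state)))

def hybrid_vs_command(feature_error, pose_error, params):
    """Blend an image-based (IBVS) term and a position-based (PBVS) term.
    params = [alpha, lam_i, lam_p]: blend weight and the two servo gains."""
    alpha, lam_i, lam_p = params
    v_ibvs = -lam_i * feature_error                 # image-space correction
    v_pbvs = -lam_p * pose_error                    # Cartesian correction
    return alpha * v_ibvs + (1.0 - alpha) * v_pbvs  # commanded 6-DoF twist

def adaptive_sigma(sigma0, ep_return, target_return, sigma_min=0.02):
    """One plausible adaptive-exploration rule: shrink the Gaussian action
    noise as the episode return approaches a target return."""
    progress = np.clip(ep_return / target_return, 0.0, 1.0)
    return max(sigma_min, sigma0 * (1.0 - progress))

rng = np.random.default_rng(0)
state_dim, param_dim, steps = 12, 3, 50             # 6 feature + 6 pose errors
weights = rng.normal(scale=0.1, size=(param_dim, state_dim))  # untrained actor
sigma = 0.3                                         # initial exploration noise

for episode in range(3):                            # toy rollouts, no DDPG update
    state = rng.normal(size=state_dim)
    ep_return = 0.0
    for _ in range(steps):
        params = actor(state, weights)
        noisy = np.clip(params + rng.normal(scale=sigma, size=param_dim), 0.0, 1.0)
        twist = hybrid_vs_command(state[:6], state[6:], noisy)
        state = state + 0.05 * np.concatenate([twist, twist])  # toy closed loop
        ep_return += 1.0 / (1.0 + np.linalg.norm(state))       # reward: small error
    sigma = adaptive_sigma(0.3, ep_return, target_return=steps)
    print(f"episode {episode}: return={ep_return:.2f}, next sigma={sigma:.3f}")
```

In the actual method the actor-critic would be trained with DDPG in the virtual environment before deployment on the real robot; the sketch only shows the rollout-time structure of parameter selection, hybrid command blending, and noise adaptation.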
Citation
Pages: 3513-3524
Number of pages: 12