Analytical and deep learning approaches for solving the inverse kinematic problem of a high degrees of freedom robotic arm

Cited by: 16
Authors
Wagaa, Nesrine [1 ]
Kallel, Hichem [2 ]
Mellouli, Nedra [3 ,4 ]
Affiliations
[1] Univ Carthage, Natl Inst Appl Sci & Technol INSAT, LARATSI Lab, Carthage 1080, Tunisia
[2] MedTech South Mediterranean Univ, Carthage 1053, Tunisia
[3] LIASD EA4383 Paris 8 Univ, Paris, France
[4] Leonard Vinci Pole Univ, Res Ctr Paris Def, Paris, France
Keywords
Robotic arm; Analytical approach; Inverse kinematic; Neural networks; Hyper parameters; Numbers of Degrees of Freedom; MODEL; OPTIMIZATION; MANIPULATOR;
DOI
10.1016/j.engappai.2023.106301
CLC Classification
TP [Automation technology; computer technology];
Subject Classification Code
0812
Abstract
Inverse kinematics is the basis for controlling the motion of robotic manipulators. It defines the joint variables required for the robotic end-effector to accurately reach the desired location. Due to derivation difficulty, computational complexity, the singularity problem, and redundancy, analytical inverse kinematics solutions pose numerous challenges to the operation of many robotic arms, especially manipulators with a high number of degrees of freedom. This paper develops different deep learning networks for solving the inverse kinematics problem of six-degrees-of-freedom robotic manipulators. The implemented neural architectures are the Artificial Neural Network, Convolutional Neural Network, Long Short-Term Memory, Gated Recurrent Unit, and Bidirectional Long Short-Term Memory. In this context, we associate the proposed results with a specific tuning of the deep learning networks' hyper-parameters (number of hidden layers, learning rate, loss function, optimization algorithm, number of epochs, etc.). The Bidirectional Long Short-Term Memory network outperformed all proposed architectures. To stay as close as possible to experimental conditions, we included two types of noise in the training data set to validate which of the five proposed neural networks is most efficient. Furthermore, we compare the performance of analytical and soft-computing solutions in generating robot trajectories, focusing on the advantage of neural networks in avoiding the singularity problem that can occur with the analytical approach. In addition, we used the RoboDK simulator to present simulation results with real-world meaning. The performance of deep learning models depends on the complexity of the posed problem, and the complexity of the inverse kinematics problem is related to the number of degrees of freedom. At the end of this work, we therefore evaluate the influence of manipulator complexity on the proposed deep learning networks' performance. The results show that the implemented deep learning mechanisms performed well in reaching the desired pose of the end-effector. The proposed inverse kinematics strategies apply to other manipulators with different numbers of degrees of freedom.
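To make the learning-based formulation concrete, the sketch below shows one plausible way to frame inverse kinematics as a supervised regression problem with a Bidirectional LSTM, the architecture the abstract reports as best-performing. It is not the authors' exact network: the layer sizes, optimizer, loss, and the synthetic placeholder data are illustrative assumptions standing in for the hyper-parameters and forward-kinematics-generated training set described in the paper.

```python
# Hypothetical sketch (not the paper's exact architecture): a Bidirectional
# LSTM regressor mapping a desired end-effector pose (x, y, z, roll, pitch,
# yaw) to the six joint angles of a 6-DoF manipulator.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_JOINTS = 6   # 6-DoF manipulator
POSE_DIM = 6   # end-effector position + orientation

model = models.Sequential([
    # Treat the pose vector as a length-1 "sequence" so recurrent layers apply.
    layers.Input(shape=(1, POSE_DIM)),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_JOINTS),  # predicted joint angles (radians)
])

# Illustrative hyper-parameters of the kind the paper tunes
# (optimizer, learning rate, loss function, epochs, batch size).
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="mse", metrics=["mae"])

# Placeholder data: in practice, poses and joint targets would be generated
# from forward kinematics and optionally perturbed with noise, as the paper
# does to test robustness.
X = np.random.uniform(-1.0, 1.0, size=(2000, 1, POSE_DIM)).astype("float32")
y = np.random.uniform(-np.pi, np.pi, size=(2000, N_JOINTS)).astype("float32")

model.fit(X, y, epochs=5, batch_size=64, validation_split=0.1, verbose=0)
```

Under this framing, the other architectures compared in the paper (ANN, CNN, LSTM, GRU) would differ only in the hidden layers between the pose input and the joint-angle output, which is what makes a like-for-like hyper-parameter study possible.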
Pages: 25