Incrementally Stochastic and Accelerated Gradient Information Mixed Optimization for Manipulator Motion Planning

Cited by: 1
Authors
Feng, Yichang [1 ,2 ]
Wang, Jin [1 ,2 ]
Zhang, Haiyun [1 ,2 ]
Lu, Guodong [1 ,2 ]
Affiliations
[1] Zhejiang Univ, State Key Lab Fluid Power & Mechatron Syst, Sch Mech Engn, Hangzhou 310027, Peoples R China
[2] Zhejiang Univ, Engn Res Ctr Design Engn & Digital Twin Zhejiang, Sch Mech Engn, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Trajectory; Optimization; Planning; Costs; Convergence; Task analysis; Sampling methods; Constrained motion planning; collision avoidance; manipulation planning;
DOI
10.1109/LRA.2022.3191206
CLC Number
TP24 [Robotics];
Discipline Classification Code
080202 ; 1405 ;
Abstract
This paper introduces a novel motion planner, incrementally stochastic and accelerated gradient information mixed optimization (iSAGO), for robotic manipulators in a narrow workspace. First, we propose the overall scheme of iSAGO, informed by mixed momenta, for efficient constrained optimization based on the penalty method. In the stochastic part, we generate adaptive stochastic momenta via random selection of sub-functionals based on the adaptive momentum (Adam) method to resolve body-obstacle stuck cases. Because the stochastic part converges slowly, we integrate accelerated gradient descent (AGD) to improve planning efficiency. Moreover, we adopt Bayesian tree inference (BTI) to transform the whole-trajectory optimization (SAGO) into an incremental sub-trajectory optimization (iSAGO), which further improves computation efficiency and success rate. Finally, we tune the key parameters and benchmark iSAGO against five other planners on an LBR iiwa at a bookshelf and an AUBO-i5 at a storage shelf. The results show that iSAGO achieves the highest success rate with moderate solving efficiency.
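As a rough illustration of the mixed-momenta scheme described in the abstract, the sketch below combines Adam-style momenta, computed on a random batch of collision sub-functionals, with a Nesterov-type accelerated step on the smooth part of a penalized trajectory cost. The toy 2-D obstacle model, the function names (smoothness_grad, obstacle_grad, sago_step), and all parameter values are illustrative assumptions, not the authors' iSAGO implementation.

```python
import numpy as np

def smoothness_grad(xi):
    # Gradient of the finite-difference smoothness cost sum_t ||x_{t+1} - x_t||^2.
    g = np.zeros_like(xi)
    diff = xi[1:] - xi[:-1]
    g[1:] += 2.0 * diff
    g[:-1] -= 2.0 * diff
    return g

def obstacle_grad(xi, idx, center, radius, mu):
    # Penalty gradient of a hinge-type clearance cost, evaluated only at the
    # randomly selected waypoints idx (the stochastic "sub-functionals").
    g = np.zeros_like(xi)
    d = xi[idx] - center
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    pen = np.maximum(radius - dist, 0.0)            # penetration depth
    g[idx] = -mu * pen * d / np.maximum(dist, 1e-9)
    return g

def sago_step(xi, v, m, s, t, rng, center, radius=0.3, mu=50.0,
              lr=1e-2, beta1=0.9, beta2=0.999, gamma=0.9, batch=4):
    """One mixed update: Adam momenta on a random batch of obstacle
    sub-functionals plus a Nesterov-style accelerated step on the smooth cost."""
    T = xi.shape[0]
    # Stochastic part: Adam on randomly selected interior waypoints.
    idx = rng.choice(np.arange(1, T - 1), size=batch, replace=False)
    g_sto = obstacle_grad(xi, idx, center, radius, mu)
    m = beta1 * m + (1 - beta1) * g_sto
    s = beta2 * s + (1 - beta2) * g_sto ** 2
    adam_dir = (m / (1 - beta1 ** t)) / (np.sqrt(s / (1 - beta2 ** t)) + 1e-8)
    # Accelerated part: Nesterov lookahead gradient on the smoothness cost.
    g_full = smoothness_grad(xi + gamma * v)
    v = gamma * v - lr * g_full
    # Mixed-momenta trajectory update; start and goal stay fixed.
    xi_new = xi + v - lr * adam_dir
    xi_new[0], xi_new[-1] = xi[0], xi[-1]
    return xi_new, v, m, s

# Minimal usage loop: a straight-line seed path through the assumed obstacle.
rng = np.random.default_rng(0)
xi = np.linspace([0.0, 0.0], [1.0, 1.0], 20)
v, m, s = (np.zeros_like(xi) for _ in range(3))
for t in range(1, 201):
    xi, v, m, s = sago_step(xi, v, m, s, t, rng, center=np.array([0.5, 0.5]))
```

The random waypoint batch stands in for the paper's stochastic sub-functional selection, and the lookahead gradient supplies the accelerated component; a real planner would instead operate in joint space with a signed-distance collision model and the BTI-based incremental sub-trajectory decomposition.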
Pages: 9904-9911
Page count: 8