A trajectory and force dual-incremental robot skill learning and generalization framework using improved dynamical movement primitives and adaptive neural network control

Cited by: 12
Authors
Lu, Zhenyu [1 ]
Wang, Ning [1 ]
Li, Qinchuan [2 ]
Yang, Chenguang [1 ]
Affiliations
[1] Univ West England, Bristol Robot Lab, Bristol BS16 1QY, England
[2] Zhejiang Sci Tech Univ, Sch Mech Engn, Hangzhou 310018, Zhejiang, Peoples R China
Funding
EU Horizon 2020; UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Incremental skill learning and generalization; Learning from demonstration; Dynamic movement primitive (DMP); Adaptive neural network (NN) control; Multiple stylistic skill generalization
DOI
10.1016/j.neucom.2022.11.076
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Due to changes in the environment and errors that occur during skill initialization, a robot's operational skills must be modified to adapt to new tasks. Skills learned by methods with fixed features, such as the classical Dynamical Movement Primitive (DMP), are therefore difficult to apply when the use cases differ significantly from the demonstrations. In this work, we propose an incremental robot skill learning and generalization framework comprising an incremental DMP (IDMP) for robot trajectory learning and an adaptive neural network (NN) control method, both of which are incrementally updated to enable robots to adapt to new cases. IDMP uses multi-mapping feature vectors, extended from the original feature vector, to rebuild the forcing function of the DMP. To preserve the original skills while representing skill changes in a new task, the new feature vector consists of three parts with different usages. The trajectories are thus gradually changed by expanding the feature and weight vectors, and all transition states are easily recovered. An adaptive NN controller with performance constraints is then proposed to compensate for dynamics errors and the trajectories changed by the IDMP. This controller is also incrementally updated and can accumulate and reuse learned knowledge to improve learning efficiency. Compared with other methods, the proposed framework achieves higher tracking accuracy, realizes incremental skill learning and modification, produces multiple stylistic skills, and supports obstacle avoidance at different heights, as verified in three comparative experiments. (c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
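For context, the forcing function that the abstract says IDMP rebuilds is, in the classical DMP formulation, a phase-gated, normalized weighted sum of Gaussian basis functions. A minimal sketch follows; all names, basis counts, and parameter choices are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def forcing_function(x, weights, centers, widths):
    """Classical DMP forcing term: normalized weighted sum of Gaussian
    basis functions psi_i(x), gated by the phase variable x so the term
    vanishes as x -> 0 (end of the movement)."""
    psi = np.exp(-widths * (x - centers) ** 2)      # basis activations
    return x * np.dot(psi, weights) / (np.sum(psi) + 1e-10)

# The phase x decays from 1 to 0 under the canonical system x_dot = -a_x * x,
# so centers are commonly spread along that exponential decay.
n_basis = 10
centers = np.exp(-3.0 * np.linspace(0.0, 1.0, n_basis))
widths = n_basis / centers ** 2        # narrower basis functions near x -> 0
weights = np.zeros(n_basis)            # in practice fitted from a demonstration

f = forcing_function(0.5, weights, centers, widths)
```

In the classical DMP the weight vector is fixed after learning; the paper's contribution (as the abstract describes) is to extend the feature and weight vectors incrementally so new task variations can be represented without discarding the original skill.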
Pages: 146-159
Page count: 14