An improved approach of task-parameterized learning from demonstrations for cobots in dynamic manufacturing

Cited by: 14
Authors
El Zaatari, Shirine [1 ]
Wang, Yuqi [2 ]
Hu, Yudie [2 ]
Li, Weidong [1 ,2 ]
Affiliations
[1] Coventry Univ, Fac Engn Environm & Comp, Coventry, W Midlands, England
[2] Wuhan Univ Technol, Sch Logist Engn, Wuhan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Learning from demonstration; Reinforcement learning; Collaborative robots;
DOI
10.1007/s10845-021-01743-w
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Task-Parameterized Learning from Demonstrations (TP-LfD) is an intelligent, intuitive approach to supporting collaborative robots (cobots) in various industrial applications. With TP-LfD, a cobot learns from human-demonstrated paths and reproduces new paths to follow intelligently in dynamic situations. One challenge in applying TP-LfD to industrial scenarios is identifying and optimizing its critical task parameters, i.e., the frames in demonstrations. To overcome this challenge and enhance the performance of TP-LfD in complex manufacturing applications, this paper presents an improved TP-LfD approach in which frames are chosen autonomously from a pool of generic visual features. To strengthen computational convergence, a statistical algorithm and a reinforcement learning algorithm are designed to eliminate redundant frames and irrelevant frames, respectively. Meanwhile, a B-spline cut-in algorithm is integrated into the approach to enhance path reproduction in dynamic manufacturing situations. Case studies were conducted to validate the improved TP-LfD approach and to showcase its advantages. Owing to its robust and generic capabilities, the improved TP-LfD approach enables teaching a cobot to behave in a more intuitive and intelligent manner to support dynamic manufacturing applications.
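The abstract does not detail the B-spline cut-in algorithm, so the sketch below is only an illustration of the general idea: blending a cobot's current position smoothly onto a reproduced path using a uniform cubic B-spline segment. All names (`cut_in_path`, `join_idx`) are hypothetical and not taken from the paper.

```python
import numpy as np

def cubic_bspline(ctrl, samples=50):
    """Evaluate a uniform cubic B-spline over control points ctrl (n x d)."""
    ctrl = np.asarray(ctrl, dtype=float)
    out = []
    for i in range(len(ctrl) - 3):           # one curve segment per 4 consecutive points
        for t in np.linspace(0.0, 1.0, samples, endpoint=False):
            # cubic B-spline basis weights for the four control points
            b = np.array([(1 - t)**3,
                          3*t**3 - 6*t**2 + 4,
                          -3*t**3 + 3*t**2 + 3*t + 1,
                          t**3]) / 6.0
            out.append(b @ ctrl[i:i + 4])
    return np.array(out)

def cut_in_path(current_pos, reproduced_path, join_idx):
    """Blend from current_pos onto reproduced_path at join_idx, then follow it."""
    current_pos = np.asarray(current_pos, dtype=float)
    # duplicate the start point so the blend segment is pulled toward current_pos
    ctrl = np.vstack([current_pos, current_pos,
                      reproduced_path[join_idx:join_idx + 2]])
    blend = cubic_bspline(ctrl)
    return np.vstack([blend, reproduced_path[join_idx + 1:]])
```

A usage example: if a reproduced path is a straight line and the cobot is displaced from it (e.g., after a dynamic change in the scene), `cut_in_path` returns a path that curves in from the current position and then continues along the original path to its end.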
Pages: 1503-1519
Page count: 17