Simulation-Aided Handover Prediction From Video Using Recurrent Image-to-Motion Networks

Cited by: 7
Authors
Mavsar, Matija [1 ]
Ridge, Barry [1 ,2 ]
Pahic, Rok [1 ]
Morimoto, Jun [2 ,3 ]
Ude, Ales [1 ,2 ]
Affiliations
[1] Jozef Stefan Inst, Humanoid & Cognit Robot Lab, Dept Automat Biocybernet & Robot, Ljubljana 1000, Slovenia
[2] Adv Telecommun Res Inst Int, ATR Computat Neurosci Labs, Dept Brain Robot Interface, Kyoto 6190237, Japan
[3] Kyoto Univ, Grad Sch Informat, Kyoto 6068501, Japan
Funding
Japan Society for the Promotion of Science; Japan Science and Technology Agency;
Keywords
Trajectory; Robots; Handover; Task analysis; Receivers; Training; Computational modeling; Dynamic movement primitives (DMPs); handover; machine vision; recurrent neural networks (RNNs); robot learning; simulation; MOVEMENT PRIMITIVES; GENERATION; ADAPTATION;
DOI
10.1109/TNNLS.2022.3175720
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Recent advances in deep neural networks have opened up new possibilities for visuomotor robot learning. In the context of human-robot or robot-robot collaboration, such networks can be trained to predict future poses, and this information can be used to improve the dynamics of cooperative tasks. This is important both for realizing various cooperative behaviors and for ensuring safety. In this article, we propose a recurrent neural architecture capable of transforming variable-length input motion videos into a set of parameters describing a robot trajectory, where predictions can be made after receiving only a few frames. A simulation environment is utilized to expand the training database and to improve the generalization capability of the network. The resulting architecture demonstrates good accuracy when predicting handover trajectories, with models trained on both synthetic and real data outperforming models trained on real or simulated data only. The computed trajectories enable the execution of handover tasks with uncalibrated robots, which was verified in an experiment with two real robots.
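The core idea in the abstract — a recurrent network consumes video frames one at a time and, after every frame, emits a fixed-size vector of trajectory parameters (e.g., DMP weights), so a usable prediction exists after only a few frames — can be sketched as follows. This is a minimal pure-Python illustration; the layer sizes, the tanh recurrence, and all function names are assumptions for clarity, not the authors' actual architecture.

```python
import math
import random

def make_rnn(feat_dim, hidden_dim, out_dim, seed=0):
    # Randomly initialized weights for a single-layer recurrent cell
    # with a linear readout to the trajectory-parameter vector.
    rng = random.Random(seed)
    mat = lambda r, c: [[rng.uniform(-0.1, 0.1) for _ in range(c)] for _ in range(r)]
    return {"Wx": mat(hidden_dim, feat_dim),   # input-to-hidden
            "Wh": mat(hidden_dim, hidden_dim), # hidden-to-hidden
            "Wo": mat(out_dim, hidden_dim)}    # hidden-to-output

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def step(params, h, frame_features):
    # Simple tanh recurrence: h' = tanh(Wx f + Wh h); the readout
    # maps the hidden state to trajectory (e.g., DMP) parameters.
    pre = [a + b for a, b in zip(matvec(params["Wx"], frame_features),
                                 matvec(params["Wh"], h))]
    h_new = [math.tanh(p) for p in pre]
    traj_params = matvec(params["Wo"], h_new)
    return h_new, traj_params

params = make_rnn(feat_dim=4, hidden_dim=8, out_dim=3)
h = [0.0] * 8
video = [[0.1, 0.2, 0.3, 0.4] for _ in range(5)]  # placeholder frame features
for frame in video:
    h, pred = step(params, h, frame)  # a prediction is available per frame
print(len(pred))  # 3 trajectory parameters
```

In the paper's setting the per-frame features would come from a convolutional encoder and the output would parameterize a dynamic movement primitive; here both are stand-ins, but the variable-length, predict-early property of the recurrence is the same.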
Pages: 494-506
Page count: 13