XBG: End-to-End Imitation Learning for Autonomous Behaviour in Human-Robot Interaction and Collaboration

Cited by: 1
Authors
Cardenas-Perez, Carlos [1 ,2 ]
Romualdi, Giulio [1 ]
Elobaid, Mohamed [1 ]
Dafarra, Stefano [1 ]
L'Erario, Giuseppe [1 ,2 ]
Traversaro, Silvio [1 ]
Morerio, Pietro [3 ]
Del Bue, Alessio [3 ]
Pucci, Daniele [1 ,2 ]
Affiliations
[1] Ist Italiano Tecnol, Artificial & Mech Intelligence, I-16163 Genoa, Italy
[2] Univ Manchester, Dept Comp Sci, Manchester M13 9PL, England
[3] Ist Italiano Tecnol, Pattern Anal & Comp Vis, I-16152 Genoa, Italy
Keywords
Robots; Robot sensing systems; Humanoid robots; Human-robot interaction; Feature extraction; Imitation learning; Transformers; Legged locomotion; Solid modeling; Sensors; AI-enabled robotics; humanoid robot systems; imitation learning; learning from demonstration
DOI
10.1109/LRA.2024.3495577
CLC number
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
This letter presents XBG (eXteroceptive Behaviour Generation), a multimodal end-to-end Imitation Learning (IL) system for whole-body autonomous humanoid robots used in real-world Human-Robot Interaction (HRI) scenarios. The main contribution is an architecture for learning HRI behaviours using a data-driven approach. A diverse dataset is collected via teleoperation, covering multiple HRI scenarios, such as handshaking, handwaving, payload reception, walking, and walking with a payload. After synchronizing, filtering, and transforming the data, we show how to train the presented Deep Neural Network (DNN), integrating exteroceptive and proprioceptive information to help the robot understand both its environment and its actions. The robot takes in sequences of images (RGB and depth) and joint state information to react accordingly. By fusing multimodal signals over time, the model enables autonomous capabilities in a robotic platform. The models are evaluated based on their success rates in the mentioned HRI scenarios and are deployed on the ergoCub humanoid robot. XBG achieves success rates between 60% and 100%, even when tested in unseen environments.
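The abstract mentions a synchronization and filtering step applied to the teleoperation data before training. The paper's record does not detail that step, so the following is only a hypothetical sketch of one common approach: pairing each camera frame with the nearest joint-state sample by timestamp and dropping frames with no sample within a tolerance. All names (`synchronize`, `nearest_sample`, `max_skew`) and the sample rates are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of timestamp-based sensor synchronization:
# pair each camera frame with the closest joint-state reading.
from bisect import bisect_left


def nearest_sample(timestamps, t):
    """Return the index of the timestamp closest to t (timestamps sorted)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Choose whichever neighbour is nearer to t.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1


def synchronize(frame_ts, joint_ts, max_skew=0.05):
    """Pair each frame with its nearest joint-state sample.

    Frames with no joint sample within max_skew seconds are dropped,
    illustrating the filtering step mentioned in the abstract.
    """
    pairs = []
    for f_idx, t in enumerate(frame_ts):
        j_idx = nearest_sample(joint_ts, t)
        if abs(joint_ts[j_idx] - t) <= max_skew:
            pairs.append((f_idx, j_idx))
    return pairs


# Example: a 30 Hz camera stream against 100 Hz joint encoders.
frames = [i / 30 for i in range(5)]    # 0.000, 0.033, 0.067, ...
joints = [i / 100 for i in range(20)]  # 0.00, 0.01, 0.02, ...
print(synchronize(frames, joints))
```

With these rates every frame finds a joint sample well inside the tolerance; in practice dropped frames would indicate gaps or clock skew between the sensor streams.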
Pages: 11617-11624
Page count: 8