TraKDis: A Transformer-Based Knowledge Distillation Approach for Visual Reinforcement Learning With Application to Cloth Manipulation

Cited by: 2
Authors
Chen, Wei [1 ]
Rojas, Nicolas [1 ]
Affiliations
[1] Imperial College London, Dyson School of Design Engineering, REDS Lab, London SW7 2DB, England
Keywords
Transformers; Visualization; Task analysis; Training; Robots; State estimation; Reinforcement learning; Deep learning in grasping and manipulation; transfer learning; knowledge distillation
DOI
10.1109/LRA.2024.3358750
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Classification Codes
080202; 1405
Abstract
Approaching robotic cloth manipulation with reinforcement learning based on visual feedback is appealing, as robot perception and control can be learned simultaneously. However, the intricate dynamics of cloth and the high dimensionality of the corresponding states pose major challenges that undermine the practicality of the idea. To tackle these issues, we propose TraKDis, a novel Transformer-based Knowledge Distillation approach that decomposes the visual reinforcement learning problem into two distinct stages. In the first stage, a privileged agent is trained with complete knowledge of the cloth state. This privileged agent acts as a teacher, providing valuable guidance and training signals for the subsequent stage. The second stage is a knowledge distillation procedure in which the knowledge acquired by the privileged agent is transferred to a vision-based agent by leveraging a pre-trained state estimation model and weight initialization. TraKDis outperforms state-of-the-art RL techniques, achieving performance improvements of 21.9%, 13.8%, and 8.3% on cloth folding tasks in simulation. Furthermore, to validate robustness, we evaluate the agent in a noisy environment; the results indicate its ability to handle and adapt to environmental uncertainties effectively. Real robot experiments are also conducted to showcase the efficiency of our method in real-world scenarios.
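The two-stage recipe the abstract describes (a privileged, state-based teacher followed by distillation into a vision-based student via a pre-trained state estimator and teacher-initialized weights) follows a common privileged-learning pattern. The PyTorch sketch below illustrates that general pattern only: the module names (PrivilegedTeacher, VisionStudent), all dimensions, the MLP stand-ins, and the loss weighting beta are hypothetical placeholders, not the authors' TraKDis architecture.

# Minimal sketch of the two-stage idea, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, EMBED_DIM = 64, 8, 128  # hypothetical sizes

class PrivilegedTeacher(nn.Module):
    """Stage 1: policy trained with full (privileged) cloth state."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(STATE_DIM, EMBED_DIM), nn.ReLU())
        self.head = nn.Linear(EMBED_DIM, ACTION_DIM)

    def forward(self, state):
        z = self.encoder(state)
        return self.head(z), z  # action and intermediate feature

class VisionStudent(nn.Module):
    """Stage 2: vision-based policy. The state estimator is assumed to be
    pre-trained to regress the cloth state from images; the policy layers
    are initialized from the teacher's weights."""
    def __init__(self, teacher: PrivilegedTeacher):
        super().__init__()
        self.state_estimator = nn.Sequential(  # stand-in for a vision model
            nn.Flatten(), nn.Linear(3 * 64 * 64, STATE_DIM), nn.ReLU())
        self.encoder = nn.Sequential(nn.Linear(STATE_DIM, EMBED_DIM), nn.ReLU())
        self.head = nn.Linear(EMBED_DIM, ACTION_DIM)
        # weight initialization from the privileged teacher
        self.encoder.load_state_dict(teacher.encoder.state_dict())
        self.head.load_state_dict(teacher.head.state_dict())

    def forward(self, image):
        z = self.encoder(self.state_estimator(image))
        return self.head(z), z

def distillation_loss(student_out, teacher_out, beta=0.5):
    """One plausible choice: match the student's actions and features
    to the frozen teacher's outputs."""
    a_s, z_s = student_out
    a_t, z_t = teacher_out
    return F.mse_loss(a_s, a_t) + beta * F.mse_loss(z_s, z_t)

teacher = PrivilegedTeacher().eval()
student = VisionStudent(teacher)
image = torch.randn(4, 3, 64, 64)   # batch of camera observations
state = torch.randn(4, STATE_DIM)   # privileged cloth state, same scenes
with torch.no_grad():               # teacher provides targets only
    t_out = teacher(state)
loss = distillation_loss(student(image), t_out)
loss.backward()                     # gradients flow through the student only

In TraKDis itself the encoder is Transformer-based and the teacher is trained with reinforcement learning; the sketch only shows where the pre-trained state estimator, the weight initialization, and the distillation loss sit relative to one another.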
Pages: 2455-2462
Page count: 8