Sample-efficient Adversarial Imitation Learning

Cited by: 0
Authors
Jung, Dahuin [1]
Lee, Hyungyu [1]
Yoon, Sungroh [2]
Affiliations
[1] Seoul Natl Univ, Elect & Comp Engn, Seoul 08826, South Korea
[2] Seoul Natl Univ, Elect & Comp Engn, Interdisciplinary Program Artificial Intelligence, Seoul 08826, South Korea
Funding
National Research Foundation of Singapore;
Keywords
imitation learning; adversarial imitation learning; self-supervised learning; data efficiency;
DOI
Not available
CLC classification
TP [automation technology, computer technology];
Discipline code
0812;
Abstract
Imitation learning, in which learning is performed by demonstration, has been studied and advanced for sequential decision-making tasks in which a reward function is not predefined. However, imitation learning methods still require numerous expert demonstration samples to successfully imitate an expert's behavior. To improve sample efficiency, we utilize self-supervised representation learning, which can generate vast training signals from the given data. In this study, we propose a self-supervised representation-based adversarial imitation learning method to learn state and action representations that are robust to diverse distortions and temporally predictive, on non-image control tasks. In particular, in comparison with existing self-supervised learning methods for tabular data, we propose a different corruption method for state and action representations that is robust to diverse distortions. We theoretically and empirically observe that learning an informative feature manifold with lower sample complexity significantly improves the performance of imitation learning. The proposed method shows a 39% relative improvement over existing adversarial imitation learning methods on MuJoCo in a setting limited to 100 expert state-action pairs. Moreover, we conduct comprehensive ablations and additional experiments using demonstrations with varying optimality to provide insights into a range of factors.
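The abstract does not detail the paper's own corruption scheme, but for context, the baseline it contrasts with — SCARF-style random feature corruption used in prior self-supervised learning for tabular data — can be sketched as follows. The function name and parameters below are illustrative, not taken from the paper: each feature selected for corruption is replaced by a value drawn from that feature's empirical marginal over the batch.

```python
import numpy as np

def corrupt(batch, corruption_rate=0.3, rng=None):
    """Create a corrupted view of a batch of state-action vectors.

    SCARF-style corruption: each entry is independently selected with
    probability `corruption_rate` and replaced by a value resampled
    from the same feature column's empirical marginal distribution.
    """
    rng = np.random.default_rng(rng)
    n, d = batch.shape
    mask = rng.random((n, d)) < corruption_rate
    # Sample replacement values column-wise from the batch's own marginals,
    # so corrupted entries stay on the per-feature data manifold.
    replacements = np.stack(
        [rng.choice(batch[:, j], size=n) for j in range(d)], axis=1
    )
    return np.where(mask, replacements, batch)
```

A contrastive objective would then pull the representations of the original and corrupted views of the same state-action pair together while pushing apart views of different pairs.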
Pages: 1-32
Page count: 32