Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution

Cited by: 0
Authors
Patil, Vihang [1 ,2 ]
Hofmarcher, Markus [1 ,2 ]
Dinu, Marius-Constantin [1 ,2 ,3 ]
Dorfer, Matthias [4 ]
Blies, Patrick [4 ]
Brandstetter, Johannes [1 ,2 ,5 ]
Arjona-Medina, Jose [1 ,2 ,3 ]
Hochreiter, Sepp [1 ,2 ,6 ]
Affiliations
[1] Johannes Kepler Univ Linz, Inst Machine Learning, ELLIS Unit Linz, Linz, Austria
[2] Johannes Kepler Univ Linz, Inst Machine Learning, LIT AI Lab, Linz, Austria
[3] Dynatrace Res, Linz, Austria
[4] EnliteAI, Vienna, Austria
[5] Microsoft Res, Redmond, WA USA
[6] Inst Adv Res Artificial Intelligence, Vienna, Austria
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162 | 2022
Funding
EU Horizon 2020;
Keywords
MULTIPLE SEQUENCE ALIGNMENT; NEURAL-NETWORKS; ALGORITHM; SEARCH;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Reinforcement learning algorithms require many samples when solving complex hierarchical tasks with sparse and delayed rewards. For such complex tasks, the recently proposed RUDDER uses reward redistribution to leverage steps in the Q-function that are associated with accomplishing sub-tasks. However, often only a few episodes with high rewards are available as demonstrations, since current exploration strategies cannot discover them in reasonable time. In this work, we introduce Align-RUDDER, which utilizes a profile model for reward redistribution that is obtained from multiple sequence alignment of demonstrations. Consequently, Align-RUDDER employs reward redistribution effectively and thereby drastically improves learning from few demonstrations. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the Minecraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. Code is available at github.com/ml-jku/align-rudder.
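The abstract's core idea, stated procedurally: demonstrations are mapped to event sequences, a profile model summarizes their shared structure, and an episode's delayed return is redistributed over steps in proportion to how much each step raises the alignment score against that profile. The sketch below illustrates this flow under simplifying assumptions: the function names, the equal-length-demonstration requirement, and the plain column-frequency scoring are illustrative stand-ins, not the paper's actual multiple-sequence-alignment or scoring scheme.

```python
# Minimal sketch of alignment-based reward redistribution, assuming
# demonstrations are already mapped to short event sequences (e.g.
# cluster indices). Illustrative only; not the paper's implementation.
from collections import Counter

def profile_from_demos(demos):
    """Column-wise event frequencies of equal-length demonstration sequences."""
    length = len(demos[0])
    return [Counter(demo[t] for demo in demos) for t in range(length)]

def alignment_score(prefix, profile):
    """Score an episode prefix against the profile (0 for unseen events)."""
    return sum(profile[t][event] for t, event in enumerate(prefix) if t < len(profile))

def redistribute(episode, episode_return, profile):
    """Spread the delayed return over steps in proportion to how much each
    step increases the alignment score (RUDDER-style score differences)."""
    scores = [alignment_score(episode[: t + 1], profile) for t in range(len(episode))]
    deltas = [scores[0]] + [scores[t] - scores[t - 1] for t in range(1, len(scores))]
    total = sum(deltas) or 1.0  # guard against all-zero scores
    return [episode_return * d / total for d in deltas]

# Toy usage: three demonstrations sharing the same sub-task structure.
demos = ["abc", "abc", "adc"]
profile = profile_from_demos(demos)
print(redistribute("abc", 10.0, profile))  # [3.75, 2.5, 3.75], sums to 10.0
```

Note how the step matching the less-conserved column ("b" vs. "d") receives less redistributed reward than the fully conserved ones, which is the intended effect: conserved sub-task completions attract the credit.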
Pages: 42