Adventurer: exploration with BiGAN for deep reinforcement learning

Times Cited: 0
Authors
Liu, Yongshuai [1 ]
Liu, Xin [1 ]
Affiliations
[1] Univ Calif Davis, Comp Sci, One Shields Ave, Davis, CA 95616 USA
Funding
US National Science Foundation;
Keywords
Deep reinforcement learning; Uncertainty; Exploration; BiGAN;
DOI
10.1007/s10489-025-06600-4
Chinese Library Classification (CLC) Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent developments in deep reinforcement learning have been very successful at solving complex, previously intractable problems. Sample efficiency and local optimality, however, remain significant challenges. To address these challenges, novelty-driven exploration strategies have emerged and shown promising potential. Unfortunately, no single algorithm outperforms all others across tasks, and most struggle on tasks with high-dimensional, complex observations. In this work, we propose Adventurer, a novelty-driven exploration algorithm based on Bidirectional Generative Adversarial Networks (BiGAN), in which a BiGAN is trained to estimate state novelty. Intuitively, a generator trained on the distribution of visited states should only be able to generate states from that distribution. As a result, using the generator to reconstruct an input state from its latent representation yields a larger reconstruction error for novel states. We show that BiGAN performs well at estimating state novelty for complex observations, and this novelty estimate can be combined with intrinsic-reward-based exploration. Our empirical results show that Adventurer produces competitive results on a range of popular benchmark tasks, including continuous robotic manipulation tasks (e.g., MuJoCo robotics) and high-dimensional image-based tasks (e.g., Atari games).
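To make the novelty estimate concrete, the sketch below is a minimal, hypothetical PyTorch illustration of the idea the abstract describes, not the authors' released code: a BiGAN encoder E and generator G, assumed to be already trained on visited states, give a reconstruction error ||s - G(E(s))|| that serves as an intrinsic reward. The network sizes, the helper `intrinsic_reward`, and the scale `beta` are illustrative assumptions, and the BiGAN's adversarial training (a discriminator over joint (state, latent) pairs) is omitted.

```python
# Minimal sketch of reconstruction-error-based novelty, assuming PyTorch
# and toy dimensions; this is NOT the authors' implementation.
import torch
import torch.nn as nn

STATE_DIM, LATENT_DIM = 8, 4  # illustrative sizes, not from the paper


class Encoder(nn.Module):
    """E: state -> latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)


class Generator(nn.Module):
    """G: latent code -> state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, STATE_DIM),
        )

    def forward(self, z):
        return self.net(z)


@torch.no_grad()
def intrinsic_reward(encoder, generator, state, beta=0.1):
    """Novelty bonus = beta * ||s - G(E(s))||.

    States from the visited-state distribution reconstruct well
    (small bonus); novel states reconstruct poorly (large bonus).
    `beta` is an assumed scaling coefficient.
    """
    recon = generator(encoder(state))
    return beta * torch.linalg.vector_norm(state - recon, dim=-1)


if __name__ == "__main__":
    E, G = Encoder(), Generator()
    s = torch.randn(STATE_DIM)           # a dummy state
    r_bonus = intrinsic_reward(E, G, s)  # novelty bonus for this state
    print(float(r_bonus))
```

In an intrinsic-reward setup like the one the abstract describes, this bonus would simply be added to the environment reward (r_env + r_bonus) before the policy update.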
Pages: 13