Improving Reinforcement Learning Exploration by Autoencoders

Cited by: 0
Authors
Paczolay, Gabor [1 ]
Harmati, Istvan [1 ]
Affiliations
[1] Department of Control Engineering, Budapest University of Technology and Economics, Magyar Tudósok körútja 2., Budapest
Source
Periodica Polytechnica Electrical Engineering and Computer Science | 2024, Vol. 68, Iss. 04
Keywords
AutE-DQN; autoencoders; DQN; exploration; reinforcement learning;
DOI
10.3311/PPee.36789
Abstract
Reinforcement learning is a field with massive potential for solving engineering problems without domain knowledge. However, the exploration-exploitation problem emerges when one tries to balance a system between the learning phase and proper execution. In this paper, a new method is proposed that utilizes autoencoders to manage the exploration rate of an epsilon-greedy exploration algorithm. The error between the real state and the state reconstructed by the autoencoder becomes the basis of the exploration-exploitation rate. The proposed method is then examined in two experiments: one benchmark is the cartpole experiment, while the other is a gridworld example created for this paper to examine long-term exploration. Both experiments show that the proposed method performs better in these scenarios. © 2024 Budapest University of Technology and Economics. All rights reserved.
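The abstract's core idea (let the autoencoder's reconstruction error set epsilon, so unfamiliar states are explored more) could be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the one-hidden-layer network, the learning rate, the error normalization constant `err_max`, and the linear error-to-epsilon mapping in `epsilon_from_error` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyAutoencoder:
    """One-hidden-layer autoencoder trained by plain gradient descent on MSE."""

    def __init__(self, n_in, n_hidden, lr=0.05):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # encoder weights
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))   # decoder weights
        self.lr = lr

    def forward(self, x):
        h = np.tanh(x @ self.W1)        # hidden code
        return h, h @ self.W2           # code and reconstruction

    def reconstruction_error(self, x):
        _, x_hat = self.forward(x)
        return float(np.mean((x - x_hat) ** 2))

    def train_step(self, x):
        """One gradient-descent step on the squared reconstruction error."""
        h, x_hat = self.forward(x)
        err = x_hat - x
        grad_W2 = np.outer(h, err)
        grad_h = err @ self.W2.T
        grad_W1 = np.outer(x, grad_h * (1.0 - h ** 2))  # tanh derivative
        self.W2 -= self.lr * grad_W2
        self.W1 -= self.lr * grad_W1

def epsilon_from_error(err, err_max, eps_min=0.05, eps_max=1.0):
    """Map reconstruction error to an exploration rate: states the autoencoder
    reconstructs poorly (i.e., rarely visited ones) get a higher epsilon."""
    return eps_min + (eps_max - eps_min) * min(err / err_max, 1.0)

if __name__ == "__main__":
    ae = TinyAutoencoder(n_in=4, n_hidden=8)
    state = np.array([0.5, -0.3, 0.8, 0.1])   # a hypothetical cartpole-like state
    print("epsilon before training:", epsilon_from_error(ae.reconstruction_error(state), err_max=0.25))
    for _ in range(300):                       # visiting the state makes it familiar
        ae.train_step(state)
    print("epsilon after training: ", epsilon_from_error(ae.reconstruction_error(state), err_max=0.25))
```

As the autoencoder fits frequently visited states, their reconstruction error and hence their epsilon shrink toward `eps_min`, while novel states keep a high exploration rate; in a full agent this epsilon would gate the greedy-vs-random action choice in DQN.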
Pages: 335 - 343
Page count: 8
Related papers (50 total)
  • [31] Deep Reinforcement Learning-Based 3D Exploration with a Wall Climbing Robot
    Das, Arya
    Halder, Raju
    Thakur, Atul
    2021 IEEE REGION 10 CONFERENCE (TENCON 2021), 2021, : 863 - 868
  • [32] Self-Attention-Based Temporary Curiosity in Reinforcement Learning Exploration
    Hu, Hangkai
    Song, Shiji
    Huang, Gao
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2021, 51 (09): 5773 - 5784
  • [33] REINFORCEMENT LEARNING WITH SAFE EXPLORATION FOR NETWORK SECURITY
    Dai, Canhuang
    Xiao, Liang
    Wan, Xiaoyue
    Chen, Ye
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3057 - 3061
  • [34] Interactions between motor exploration and reinforcement learning
    Uehara, Shintaro
    Mawase, Firas
    Therrien, Amanda S.
    Cherry-Allen, Kendra M.
    Celnik, Pablo
    JOURNAL OF NEUROPHYSIOLOGY, 2019, 122 (02) : 797 - 808
  • [35] Safe Exploration Techniques for Reinforcement Learning - An Overview
    Pecka, Martin
    Svoboda, Tomas
    MODELLING AND SIMULATION FOR AUTONOMOUS SYSTEMS, MESAS 2014, 2014, 8906 : 357 - 375
  • [36] Exploration of Reinforcement Learning to Play Snake Game
    Almalki, Ali Jaber
    Wocjan, Pawel
    2019 6TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE (CSCI 2019), 2019, : 377 - 381
  • [37] Models for Autonomously Motivated Exploration in Reinforcement Learning
    Auer, Peter
    Lim, Shiau Hong
    Watkins, Chris
    DISCOVERY SCIENCE, 2011, 6926 : 29 - 29
  • [38] The role of the basal ganglia in exploration in a neural model based on reinforcement learning
    Sridharan, D.
    Prashanth, P. S.
    Chakravarthy, V. S.
    INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, 2006, 16 (02) : 111 - 124
  • [39] BALANCING EXPLORATION AND EXPLOITATION IN REINFORCEMENT LEARNING USING A VALUE OF INFORMATION CRITERION
    Sledge, Isaac J.
    Principe, Jose C.
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 2816 - 2820
  • [40] Fast and slow curiosity for high-level exploration in reinforcement learning
    Bougie, Nicolas
    Ichise, Ryutaro
    APPLIED INTELLIGENCE, 2021, 51: 1086 - 1107