An algorithm that excavates suboptimal states and improves Q-learning

Times Cited: 0
Authors
Zhu, Canxin [1 ,2 ]
Yang, Jingmin [1 ,2 ]
Zhang, Wenjie [1 ,2 ]
Zheng, Yifeng [1 ,2 ]
Affiliations
[1] Minnan Normal Univ, Sch Comp Sci, Zhangzhou 363000, Peoples R China
[2] Fuzhou Univ, Affiliated Prov Hosp, Fuzhou 363000, Fujian, Peoples R China
Source
ENGINEERING RESEARCH EXPRESS | 2024, Vol. 6, No. 4
Keywords
reinforcement learning; exploration and exploitation; Markov decision process; suboptimal state;
DOI
10.1088/2631-8695/ad8dae
Chinese Library Classification (CLC)
T [Industrial Technology];
Subject Classification Code
08;
Abstract
Reinforcement learning is inspired by trial-and-error learning in animals: the reward obtained from the agent's interaction with the environment serves as a feedback signal for training the agent. It has attracted extensive attention in recent years, is mainly used to solve sequential decision-making problems, and has been applied in areas such as autonomous driving, gaming, and robotics. Exploration and exploitation are the main characteristics that distinguish reinforcement learning from other learning methods, and reinforcement learning methods need reward-optimization strategies that balance the two well. Aiming at the problems of unbalanced exploration and a large amount of repeated exploration in the Q-learning algorithm in MDP environments, an algorithm that excavates suboptimal states and improves Q-learning is proposed. It adopts the exploration idea of 'exploring the potential of the second-best': it explores the state with the suboptimal state value and computes the exploration probability from the distance between the current state and the goal state, where a larger distance implies a higher exploration demand for the agent. In addition, only the immediate reward and the maximum action value of the next state are needed to compute the Q value. Simulation experiments in two different MDP environments, FrozenLake8x8 and CliffWalking, verify that the proposed algorithm obtains the highest average cumulative reward with the least total time consumption.
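The abstract describes the exploration scheme only at a high level. The minimal Python sketch below illustrates one plausible reading of it on a toy grid world: the exploration probability grows with the Manhattan distance to the goal, the explored action is the one with the second-highest Q value ('exploring the potential of the second-best'), and the update uses only the immediate reward plus the discounted maximum action value of the next state. The grid world, the explore_prob schedule, and the tie-breaking rule are illustrative assumptions, not the paper's exact formulation.

import numpy as np

ROWS, COLS = 4, 4                              # toy grid; start at (0, 0), goal at (3, 3)
GOAL = (ROWS - 1, COLS - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
MAX_DIST = (ROWS - 1) + (COLS - 1)             # largest Manhattan distance to the goal

def step(state, a):
    # Deterministic transition; reward 1 on reaching the goal, 0 otherwise.
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), ROWS - 1), min(max(c + dc, 0), COLS - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def explore_prob(state, eps_min=0.05, eps_max=0.5):
    # Assumed schedule: exploration probability grows with distance to the goal.
    d = abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])
    return eps_min + (eps_max - eps_min) * d / MAX_DIST

def select_action(Q, state, rng):
    # Exploit the best action; when exploring, take the action with the
    # second-highest Q value instead of a uniformly random one.
    q = Q[state]
    if rng.random() < explore_prob(state):
        return int(np.argsort(q)[-2])          # second-best action
    best = np.flatnonzero(q == q.max())        # break ties among best actions randomly
    return int(rng.choice(best))

def train(episodes=500, max_steps=200, alpha=0.1, gamma=0.95, seed=0):
    rng = np.random.default_rng(seed)
    Q = {(r, c): np.zeros(len(ACTIONS)) for r in range(ROWS) for c in range(COLS)}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(max_steps):
            a = select_action(Q, s, rng)
            s2, r, done = step(s, a)
            # Q-learning update: immediate reward plus the discounted maximum
            # action value of the next state.
            Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                break
    return Q

if __name__ == "__main__":
    Q = train()
    print("Greedy value estimate at the start state:", float(np.max(Q[(0, 0)])))

Replacing the distance-scaled, second-best exploration rule with a fixed epsilon-greedy policy in select_action recovers standard Q-learning, which is the baseline the paper compares against.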
Pages: 18
Related Papers
50 records
  • [1] Backward Q-learning: The combination of Sarsa algorithm and Q-learning
    Wang, Yin-Hao
    Li, Tzuu-Hseng S.
    Lin, Chih-Jui
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2013, 26 (09) : 2184 - 2193
  • [2] ENHANCEMENTS OF FUZZY Q-LEARNING ALGORITHM
    Glowaty, Grzegorz
    COMPUTER SCIENCE-AGH, 2005, 7 : 77 - 87
  • [3] An analysis of the pheromone Q-learning algorithm
    Monekosso, N
    Remagnino, P
    ADVANCES IN ARTIFICIAL INTELLIGENCE - IBERAMIA 2002, PROCEEDINGS, 2002, 2527 : 224 - 232
  • [4] A Weighted Smooth Q-Learning Algorithm
    Vijesh, V. Antony
    Shreyas, S. R.
    IEEE CONTROL SYSTEMS LETTERS, 2025, 9 : 21 - 26
  • [5] An improved immune Q-learning algorithm
    Ji, Zhengqiao
    Wu, Q. M. Jonathan
    Sid-Ahmed, Maher
    2007 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-8, 2007, : 3330 - +
  • [6] Exponential Moving Average Q-Learning Algorithm
    Awheda, Mostafa D.
    Schwartz, Howard M.
    PROCEEDINGS OF THE 2013 IEEE SYMPOSIUM ON ADAPTIVE DYNAMIC PROGRAMMING AND REINFORCEMENT LEARNING (ADPRL), 2013, : 31 - 38
  • [7] Trading ETFs with Deep Q-Learning Algorithm
    Hong, Shao-Yan
    Liu, Chien-Hung
    Chen, Woei-Kae
    You, Shingchern D.
    2020 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TAIWAN), 2020,
  • [8] Generating Test Cases for Q-Learning Algorithm
    Kumaresan, Lavanya
    Chamundeswari, A.
    2013 FOURTH INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATIONS AND NETWORKING TECHNOLOGIES (ICCCNT), 2013,
  • [9] Q-learning algorithm for optimal multilevel thresholding
    Yin, PY
    IC-AI'2001: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOLS I-III, 2001, : 335 - 340
  • [10] An ARM-based Q-learning algorithm
    Hsu, Yuan-Pao
    Hwang, Kao-Shing
    Lin, Hsin-Yi
    ADVANCED INTELLIGENT COMPUTING THEORIES AND APPLICATIONS: WITH ASPECTS OF CONTEMPORARY INTELLIGENT COMPUTING TECHNIQUES, 2007, 2 : 11 - +