Relative Entropy of Correct Proximal Policy Optimization Algorithms with Modified Penalty Factor in Complex Environment

Cited: 6
Authors
Chen, Weimin [1 ]
Wong, Kelvin Kian Loong [1 ]
Long, Sifan [2 ,3 ]
Sun, Zhili [4 ]
Affiliations
[1] Hunan City Univ, Sch Informat & Elect, Yiyang 413000, Peoples R China
[2] Cent South Univ, Sch Comp Sci & Engn, Changsha 410075, Peoples R China
[3] Natl Univ Def Technol, Sch Comp Sci, Changsha 410073, Peoples R China
[4] Univ Surrey, 5G & 6G Innovat Ctr, Inst Commun Syst, Dept Elect & Elect Engn, Guildford GU2 7XH, Surrey, England
Funding
National Natural Science Foundation of China;
Keywords
correct proximal policy optimization; approximation theory; reinforcement learning; optimization; policy gradient; entropy; divergence
DOI
10.3390/e24040440
Chinese Library Classification
O4 [Physics];
Discipline Code
0702;
Abstract
In the field of reinforcement learning, we propose a Correct Proximal Policy Optimization (CPPO) algorithm based on a modified penalty factor beta and relative entropy, designed to address the robustness and stationarity problems of traditional algorithms. First, we establish a policy evaluation mechanism through the policy distribution function during the reinforcement learning process. Second, the state space function is quantified by introducing entropy: an approximate policy is used to approximate the true policy distribution, and kernel-function estimation together with the computed relative entropy is used to fit the reward function for complex problems. Finally, through comparative analysis on classic test cases, we demonstrate that the proposed algorithm is effective, converges faster, and performs better than the traditional PPO algorithm, and that the relative-entropy measure exposes the differences between policies. In addition, it uses the information of a complex environment more efficiently to learn policies. At the same time, our work not only explains the rationality of the policy distribution theory; the proposed framework also balances iteration steps, computational complexity, and convergence speed, and we introduce an effective performance measure based on the concept of relative entropy.
Pages: 14
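
The abstract describes a KL-penalized (relative-entropy-penalized) policy update with a penalty factor beta that is adjusted during training. The following is a minimal, illustrative sketch of that general mechanism, assuming a categorical policy and the standard adaptive-beta rule from PPO-penalty; it is not the exact CPPO modification described in the paper, and all function names and parameter values are hypothetical.

# Minimal sketch of a KL-penalized policy update with an adaptive penalty
# factor beta (standard PPO-penalty rule, not the paper's exact CPPO variant).
import numpy as np

def categorical_kl(p_old, p_new, eps=1e-8):
    """Relative entropy KL(p_old || p_new) between categorical policies.
    p_old, p_new: arrays of shape (batch, n_actions), rows summing to 1."""
    return np.sum(p_old * (np.log(p_old + eps) - np.log(p_new + eps)), axis=-1)

def kl_penalty_loss(p_old, p_new, actions, advantages, beta):
    """Surrogate loss to minimize: -E[ratio * A - beta * KL(old || new)]."""
    idx = np.arange(len(actions))
    ratio = p_new[idx, actions] / (p_old[idx, actions] + 1e-8)
    kl = categorical_kl(p_old, p_new)
    return -np.mean(ratio * advantages - beta * kl)

def adapt_beta(beta, kl_mean, kl_target=0.01, tol=1.5, scale=2.0):
    """Adaptive penalty factor: enlarge beta when the measured KL exceeds the
    target, shrink it when the policy barely moved."""
    if kl_mean > tol * kl_target:
        beta *= scale
    elif kl_mean < kl_target / tol:
        beta /= scale
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    batch, n_actions = 64, 4
    p_old = rng.dirichlet(np.ones(n_actions), size=batch)   # old policy probs
    p_new = rng.dirichlet(np.ones(n_actions), size=batch)   # new policy probs
    actions = rng.integers(0, n_actions, size=batch)
    advantages = rng.normal(size=batch)

    beta = 1.0
    loss = kl_penalty_loss(p_old, p_new, actions, advantages, beta)
    beta = adapt_beta(beta, categorical_kl(p_old, p_new).mean())
    print(f"loss={loss:.4f}, updated beta={beta:.3f}")

In this sketch the mean relative entropy also serves as the diagnostic quantity the abstract refers to: it measures how far the updated policy has drifted from the previous one, and the penalty factor reacts to that measurement.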