Proximal Policy Optimization with Entropy Regularization

Times Cited: 0
Author
Shen, Yuqing [1]
Affiliation
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
2024 4TH INTERNATIONAL CONFERENCE ON COMPUTER, CONTROL AND ROBOTICS, ICCCR 2024 | 2024
Keywords
reinforcement learning; policy gradient; entropy regularization;
DOI
10.1109/ICCCR61138.2024.10585473
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This study presents a revision of the Proximal Policy Optimization (PPO) algorithm, aimed primarily at improving the stability of PPO during training while maintaining a balance between exploration and exploitation. Recognizing the inherent difficulty of achieving this balance in complex environments, the proposed method adopts an entropy regularization technique similar to the one used in the Asynchronous Advantage Actor-Critic (A3C) algorithm. The main purpose of this design is to encourage exploration in the early stages of training, preventing the agent from converging prematurely to a sub-optimal policy. A detailed theoretical explanation of how the entropy term improves the robustness of the learning trajectory is provided. Experimental results demonstrate that the revised PPO not only retains the original strengths of the PPO algorithm but also shows a significant improvement in the stability of the training process. This work contributes to ongoing research in reinforcement learning and offers a promising direction for future work on applying PPO in environments with complicated dynamics.
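The record does not reproduce the paper's formulas. As a reference point, the standard entropy-augmented PPO objective from Schulman et al., 2017 (reference [8] below), which the revision described in the abstract most closely resembles, maximizes

    L^{\mathrm{CLIP+VF+S}}_t(\theta) = \hat{\mathbb{E}}_t\!\left[ L^{\mathrm{CLIP}}_t(\theta) - c_1\, L^{\mathrm{VF}}_t(\theta) + c_2\, S[\pi_\theta](s_t) \right],

where

    L^{\mathrm{CLIP}}_t(\theta) = \hat{\mathbb{E}}_t\!\left[ \min\!\left( r_t(\theta)\hat{A}_t,\ \mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\hat{A}_t \right) \right],
    \qquad
    r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},

and S[\pi_\theta](s_t) is the Shannon entropy of the policy at state s_t (cf. Shannon, 1948, reference [10]). The coefficient c_2 sets the strength of the exploration bonus: a larger c_2 keeps the action distribution flatter for longer, which is what discourages premature convergence to a sub-optimal policy.

For concreteness, here is a minimal PyTorch sketch of such an entropy-regularized PPO loss, assuming a discrete action space; the function and parameter names (ppo_loss, clip_eps, vf_coef, ent_coef) are illustrative assumptions, not taken from the paper.

    import torch
    from torch.distributions import Categorical

    def ppo_loss(logits, values, old_log_probs, actions, advantages, returns,
                 clip_eps=0.2, vf_coef=0.5, ent_coef=0.01):
        # NOTE: hypothetical sketch; the default hyperparameter values are
        # the common ones from Schulman et al., 2017, not the paper's settings.
        dist = Categorical(logits=logits)
        log_probs = dist.log_prob(actions)

        # Clipped surrogate objective: the probability ratio is clipped to
        # [1 - clip_eps, 1 + clip_eps] to keep policy updates conservative.
        ratio = torch.exp(log_probs - old_log_probs)
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        policy_loss = -torch.min(unclipped, clipped).mean()

        # Squared-error value-function loss.
        value_loss = (values - returns).pow(2).mean()

        # Entropy bonus: subtracting it from the total loss rewards
        # high-entropy (more exploratory) policies, as in A3C.
        entropy = dist.entropy().mean()

        return policy_loss + vf_coef * value_loss - ent_coef * entropy

Annealing or tuning ent_coef trades early exploration against late-stage exploitation; the abstract's claim is that this term stabilizes training while preserving PPO's original strengths.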
Pages: 380-383
Number of Pages: 4
References (13 in total)
[1] Brockman G., 2016, arXiv:1606.01540.
[2] Kiran B. R., Sobh I., Talpaert V., Mannion P., Al Sallab A. A., Yogamani S., Perez P. Deep Reinforcement Learning for Autonomous Driving: A Survey. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(6): 4909-4926.
[3] Kober J., Bagnell J. A., Peters J. Reinforcement learning in robotics: A survey. International Journal of Robotics Research, 2013, 32(11): 1238-1274.
[4] Mnih V., 2013, arXiv:1312.5602.
[5] Mnih V., 2016, Proceedings of Machine Learning Research, Vol. 48.
[6] Mnih V., Kavukcuoglu K., Silver D., Rusu A. A., Veness J., Bellemare M. G., Graves A., Riedmiller M., Fidjeland A. K., Ostrovski G., Petersen S., Beattie C., Sadik A., Antonoglou I., King H., Kumaran D., Wierstra D., Legg S., Hassabis D. Human-level control through deep reinforcement learning. Nature, 2015, 518(7540): 529-533.
[7] Schulman J., 2018, arXiv:1506.02438 (DOI: 10.48550/arXiv.1506.02438).
[8] Schulman J., 2017, arXiv:1707.06347.
[9] Schulman J., 2015, Proceedings of Machine Learning Research, Vol. 37, p. 1889.
[10] Shannon C. E. A Mathematical Theory of Communication. Bell System Technical Journal, 1948, 27(3): 379-423.