Overview of Deep Reinforcement Learning Improvements and Applications

Cited by: 15
Authors
Zhang, Junjie [1 ]
Zhang, Cong [1 ]
Chien, Wei-Che [2 ]
Affiliations
[1] Wuhan Polytech Univ, Sch Math & Comp Sci, Wuhan, Peoples R China
[2] Natl Dong Hwa Univ, Dept Comp Sci & Informat Engn, Shoufeng Township, Hualien County, Taiwan
Source
JOURNAL OF INTERNET TECHNOLOGY | 2021, Vol. 22, No. 2
Funding
National Natural Science Foundation of China;
Keywords
Deep reinforcement learning; Value function; Policy gradient; Sparse reward; NETWORK;
DOI
10.3966/160792642021032202002
CLC number
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Deep reinforcement learning has received considerable attention from researchers since it was proposed. It combines the data representation capability of deep learning with the self-learning capability of reinforcement learning, giving agents the ability to make action decisions directly from raw data. Deep reinforcement learning continuously optimizes the control policy through value function approximation and policy search methods, ultimately producing an agent with a higher-level understanding of the target task. This paper provides a systematic description and summary of the improvements to these two classes of classical methods. First, it briefly describes the basic algorithms of classical deep reinforcement learning, including the Monte Carlo algorithm, the Q-Learning algorithm, and the original deep Q-network. It then introduces improvements to deep reinforcement learning methods based on value functions and policy gradients. Next, it outlines applications of deep reinforcement learning in robot control, algorithm parameter optimization, and other fields. Finally, it envisions the future of deep reinforcement learning in light of its current limitations.
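The tabular Q-Learning algorithm named in the abstract can be sketched as follows. This is a minimal illustration on a toy 5-state chain environment; the environment, reward scheme, and hyperparameter values are illustrative assumptions, not taken from the surveyed paper:

```python
# Minimal tabular Q-learning sketch on a toy 5-state chain MDP.
# States 0..4; reaching state 4 ends the episode with reward 1.
import random

N_STATES = 5
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.3  # learning rate, discount, exploration rate

def step(s, a):
    """Action 0 moves left, action 1 moves right; reward only at the goal."""
    s2 = max(s - 1, 0) if a == 0 else min(s + 1, N_STATES - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < EPS else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])
```

After training, the learned values of the non-terminal states approach the discounted optimal values 0.9^3, 0.9^2, 0.9, and 1.0, illustrating how the value function propagates the sparse goal reward backward through the chain.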
Pages: 239-255
Page count: 17