Self-correcting Q-Learning

Cited by: 0
Authors
Zhu, Rong [1 ]
Rigotti, Mattia [2 ]
Affiliations
[1] Fudan Univ, ISTBI, Shanghai, Peoples R China
[2] IBM Res AI, Yorktown Hts, NY USA
Source
THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2021 / Vol. 35
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC) code
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The Q-learning algorithm is known to be affected by the maximization bias, i.e., the systematic overestimation of action values, an important issue that has recently received renewed attention. Double Q-learning has been proposed as an efficient algorithm to mitigate this bias. However, this comes at the price of an underestimation of action values, in addition to increased memory requirements and slower convergence. In this paper, we introduce a new way to address the maximization bias in the form of a "self-correcting algorithm" for approximating the maximum of an expected value. Our method balances the overestimation of the single estimator used in conventional Q-learning and the underestimation of the double estimator used in Double Q-learning. Applying this strategy to Q-learning results in Self-correcting Q-learning. We show theoretically that this new algorithm enjoys the same convergence guarantees as Q-learning while being more accurate. Empirically, it performs better than Double Q-learning in domains with rewards of high variance, and it even attains faster convergence than Q-learning in domains with rewards of zero or low variance. These advantages transfer to a Deep Q Network implementation that we call Self-correcting DQN, which outperforms regular DQN and Double DQN on several tasks in the Atari 2600 domain.
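To make the estimation issue in the abstract concrete, the following is a minimal numerical sketch (not taken from the paper): it compares the single estimator max_a mu_hat(a) that Q-learning relies on, which tends to overestimate max_a E[X_a], with the cross-validated double estimator behind Double Q-learning, which tends to underestimate it. The reward model, sample sizes, and all variable names are illustrative assumptions.

import numpy as np

# Illustrative setup (an assumption, not the paper's experiment): 5 actions,
# one with expected reward 0.5 and the rest 0.0, unit-variance Gaussian noise.
rng = np.random.default_rng(0)
true_means = np.array([0.5, 0.0, 0.0, 0.0, 0.0])
true_max = true_means.max()                      # quantity being estimated: 0.5
n_actions, n_samples, n_trials = len(true_means), 10, 50_000

single_est, double_est = [], []
for _ in range(n_trials):
    # Noisy reward samples for each action.
    samples = rng.normal(true_means[:, None], 1.0, size=(n_actions, n_samples))

    # Single estimator (as in Q-learning): max over the per-action sample means.
    single_est.append(samples.mean(axis=1).max())

    # Double estimator (as in Double Q-learning): one half of the data selects
    # the greedy action, the other half evaluates it.
    half_a, half_b = samples[:, : n_samples // 2], samples[:, n_samples // 2:]
    greedy = half_a.mean(axis=1).argmax()
    double_est.append(half_b.mean(axis=1)[greedy])

print(f"true max of expected rewards: {true_max:+.3f}")
print(f"single estimator (average)  : {np.mean(single_est):+.3f}")   # biased upward
print(f"double estimator (average)  : {np.mean(double_est):+.3f}")   # biased downward

The self-correcting estimator proposed in the paper is designed to sit between these two regimes; its exact form is given in the paper and is not reproduced here.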
Pages: 11185-11192
Page count: 8