Choice of discount rate in reinforcement learning with long-delay rewards

Cited by: 0
Authors: LIN Xiangyang [1]; XING Qinghua [1]; LIU Fuxian [1]
Affiliation: [1] Department of Air Defense and Anti-Missile, Air Force Engineering University
Funding: National Natural Science Foundation of China
Keywords: (none listed)
DOI: not available
CLC classification: TP181 [automated reasoning, machine learning]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
In the real world, most successes are the result of long-term effort. The reward of success is extremely high, but it requires a long period of investment beforehand. People who are “myopic” value only short-term rewards and are unwilling to make early-stage investments, so they rarely achieve ultimate success and the corresponding high reward. Similarly, for a reinforcement learning (RL) model with long-delay rewards, the discount rate determines the strength of the agent’s “farsightedness”. To enable the trained agent to make a chain of correct choices and finally succeed, this paper first derives the feasible region of the discount rate mathematically; this region satisfies the agent’s “farsightedness” requirement. Then, to avoid the complicated problem of solving implicit equations when choosing a feasible solution, a simple method is explored and verified by theoretical demonstration and mathematical experiments. A series of RL experiments is then designed and implemented to verify the validity of the theory. Finally, the model is extended from the finite process to the infinite process, and the validity of the extended model is verified by theory and experiments. The research not only reveals the significance of the discount rate, but also provides a theoretical basis and a practical method for choosing the discount rate in future research.
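The “farsightedness” argument in the abstract can be made concrete with the standard discounted return G = Σ_t γ^t r_t. The following Python sketch is not taken from the paper: the horizon T, the delayed reward, and the per-step reward of the myopic alternative are assumed values chosen only to illustrate how the discount rate decides which behaviour a trained agent prefers, and the grid search at the end is a naive stand-in for the feasible region that the paper derives analytically.

```python
# A minimal sketch (assumed toy setting, not the paper's derivation): compare a
# "farsighted" policy that forgoes small rewards and collects one large reward
# after T steps with a "myopic" policy that takes a small reward every step.

def discounted_return(rewards, gamma):
    """Standard discounted return: sum over t of gamma**t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

T = 20            # assumed number of "investment" steps before the large reward
R_FINAL = 100.0   # assumed long-delay reward for ultimate success
R_SMALL = 1.0     # assumed per-step reward of the myopic alternative

farsighted = [0.0] * T + [R_FINAL]   # invest for T steps, then succeed once
myopic = [R_SMALL] * (T + 1)         # collect a small reward at every step

for gamma in (0.80, 0.90, 0.95, 0.99):
    g_far = discounted_return(farsighted, gamma)
    g_myo = discounted_return(myopic, gamma)
    print(f"gamma={gamma:.2f}  farsighted={g_far:7.2f}  myopic={g_myo:7.2f}  "
          f"delayed reward preferred: {g_far > g_myo}")

# Crude grid search for the smallest gamma (on this grid) at which the agent
# prefers the delayed reward; the paper instead characterizes this boundary
# analytically as a feasible region of the discount rate.
threshold = next(
    g / 1000
    for g in range(1, 1000)
    if discounted_return(farsighted, g / 1000) > discounted_return(myopic, g / 1000)
)
print(f"smallest gamma on the grid with a farsighted preference: {threshold:.3f}")
```

In this toy setting the preference flips somewhere between γ = 0.80 and γ = 0.90; discount rates below that boundary produce a “myopic” agent, which is exactly the regime the paper’s feasible region is intended to exclude.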
Pages: 381-392 (12 pages)