Model-Free Non-Stationarity Detection and Adaptation in Reinforcement Learning

Cited by: 7
Authors
Canonaco, Giuseppe [1]
Restelli, Marcello [1]
Roveri, Manuel [1]
Affiliations
[1] Politecnico di Milano, Milan, Italy
Source
ECAI 2020: 24th European Conference on Artificial Intelligence | 2020, Vol. 325
Keywords
DOI
10.3233/FAIA200200
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In most Reinforcement Learning (RL) studies, the considered task is assumed to be stationary, i.e., its behavior and characteristics do not change over time, since this assumption underlies the convergence guarantees of RL techniques. Unfortunately, it does not hold in real-world scenarios, where systems and environments typically evolve over time. For instance, in robotic applications, sensor or actuator faults induce a sudden change in the RL setting, while in financial applications the evolution of the market causes a more gradual variation over time. In this paper, we present an adaptive RL algorithm able to detect changes in the environment or in the reward function and to react to these changes by adapting to the new conditions of the task. First, we develop a figure of merit to which a hypothesis test can be applied to detect changes between two different learning iterations. Then, we extend this test to operate sequentially over time by means of the CUmulative SUM (CUSUM) approach. Finally, the proposed change-detection mechanism is combined (following an adaptive-active approach) with a well-known RL algorithm to enable it to deal with non-stationary tasks. We test the proposed algorithm on two well-known continuous-control tasks to assess its effectiveness, in terms of non-stationarity detection and adaptation, against a vanilla RL algorithm.
Pages: 1047-1054
Number of pages: 8
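As a rough illustration of the sequential detection step described in the abstract, the sketch below applies a one-sided CUSUM test to a stream of per-iteration scalar statistics. It does not reproduce the paper's actual figure of merit or adaptation mechanism: the `CusumDetector` class and its parameters `nu` (allowed drift), `h` (threshold), and `n_calib` (calibration length) are hypothetical names and tuning choices introduced only for this example.

```python
# Minimal sketch, assuming a scalar statistic is computed after every learning
# iteration. This is NOT the authors' figure of merit; it only illustrates a
# one-sided CUSUM test with a calibration phase and a reset-on-detection policy.

class CusumDetector:
    def __init__(self, nu: float = 0.05, h: float = 5.0, n_calib: int = 20):
        self.nu = nu            # allowed drift before evidence accumulates
        self.h = h              # detection threshold on the cumulative sum
        self.n_calib = n_calib  # iterations used to estimate the in-control mean
        self.reset()

    def reset(self) -> None:
        self.g = 0.0            # current CUSUM statistic
        self.mu0 = 0.0          # in-control mean (estimated during calibration)
        self.seen = 0           # number of calibration samples observed

    def update(self, s: float) -> bool:
        """Feed one statistic; return True when a change is detected."""
        if self.seen < self.n_calib:
            # Calibration phase: running average of the in-control statistic.
            self.seen += 1
            self.mu0 += (s - self.mu0) / self.seen
            return False
        # Monitoring phase: accumulate positive deviations beyond the drift.
        self.g = max(0.0, self.g + (s - self.mu0 - self.nu))
        if self.g > self.h:
            self.reset()        # restart calibration after a detection
            return True
        return False


if __name__ == "__main__":
    import random

    random.seed(0)
    detector = CusumDetector(nu=0.1, h=4.0, n_calib=30)
    for t in range(200):
        # Stationary statistic for 100 iterations, then an abrupt shift.
        stat = random.gauss(0.0, 1.0) + (2.0 if t >= 100 else 0.0)
        if detector.update(stat):
            print(f"change detected at iteration {t}")
```

In the adaptive-active scheme the paper describes, a detection would trigger a reaction such as resetting the learner's statistics or discarding pre-change data; the print statement above is only a placeholder for that reaction.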