Finding the optimal exploration-exploitation trade-off online through Bayesian risk estimation and minimization

Cited by: 0
Authors
Jamieson, Stewart [1 ,2 ,3 ,4 ]
How, Jonathan P. [3 ]
Girdhar, Yogesh [4 ]
Affiliations
[1] MIT WHOI Joint Program Oceanog Appl Ocean Sci & E, Cambridge, MA 02139 USA
[2] MIT WHOI Joint Program Oceanog Appl Ocean Sci & E, Woods Hole, MA 02139 USA
[3] MIT, Dept Aeronaut & Astronaut, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[4] Woods Hole Oceanog Inst, Dept Appl Ocean Phys & Engn, 266 Woods Hole Rd, Woods Hole, MA 02543 USA
Funding
U.S. National Science Foundation;
Keywords
Bayesian risk; Stochastic online learning; Multi-armed bandits; Partial monitoring; FINITE-TIME ANALYSIS; REGRET BOUNDS; REINFORCEMENT; ALGORITHM; BANDITS; POLICY;
DOI
10.1016/j.artint.2024.104096
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose endogenous Bayesian risk minimization (EBRM) over policy sets as an approach to online learning across a wide range of settings. Many real-world online learning problems have complexities such as action- and belief-dependent rewards, time-discounting of reward, and heterogeneous costs for actions and feedback; we find that existing online learning heuristics cannot leverage most problem-specific information, to the detriment of their performance. We introduce a belief-space Markov decision process (BMDP) model that can capture these complexities, and further apply the concepts of aleatoric, epistemic, and process risks to online learning. These risk functions describe the risk inherent to the learning problem, the risk due to the agent's lack of knowledge, and the relative quality of its policy, respectively. We demonstrate how computing and minimizing these risk functions guides the online learning agent towards the optimal exploration-exploitation trade-off in any stochastic online learning problem, constituting the basis of the EBRM approach. We also show how Bayes' risk, the minimization objective in stochastic online learning problems, can be decomposed into the aforementioned aleatoric, epistemic, and process risks. In simulation experiments, EBRM algorithms achieve state-of-the-art performance across various classical online learning problems, including Gaussian and Bernoulli multi-armed bandits, best-arm identification, mixed objectives with action- and belief-dependent rewards, and dynamic pricing, a finite partial monitoring problem. To our knowledge, it is also the first computationally efficient online learning approach that can provide online bounds on an algorithm's Bayes' risk. Finally, because the EBRM approach is parameterized by a set of policy algorithms, it can be extended to incorporate new developments in online learning algorithms, and is thus well-suited as the foundation for developing real-world learning agents.
Pages: 39