Learning optimal admission control in partially observable queueing networks

Cited: 0
Authors
Anselmi, Jonatha [1 ]
Gaujal, Bruno [1 ]
Rebuffi, Louis-Sebastien [1 ]
Affiliations
[1] Univ Grenoble Alpes, Inria, CNRS, Grenoble INP,LIG, F-38000 Grenoble, France
Keywords
Product-form queueing networks; Norton's theorem; Admission control; Reinforcement learning; Regret
DOI
10.1007/s11134-024-09917-y
CLC number
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
We develop an efficient reinforcement learning algorithm that learns the optimal admission control policy in a partially observable queueing network. Specifically, only the arrival and departure times from the network are observable, optimality refers to the average holding/rejection cost in infinite horizon, and efficiency is with respect to regret performance. While reinforcement learning in partially observable Markov decision processes (MDPs) is prohibitively expensive in general, we show that the regret at time T induced by our algorithm is Õ(√(T log(1/ρ))), where ρ ∈ (0, 1) is connected to the mixing time of the underlying MDP. In contrast with existing regret bounds, ours does not depend on the diameter D of the underlying MDP, which in most queueing systems is at least exponential in S, i.e., the maximal number of jobs in the network. Instead, the role of the diameter is played by the log(1/ρ) term, which may depend on S, but we find that such dependence is "minimal". In the case of acyclic or hyperstable queueing networks, we prove that log(1/ρ) = O(S), which overall provides a regret bound of the order of Õ(√(TS)). In the general case, numerical simulations support the claim that the term log(1/ρ) remains extremely small compared to the diameter. The novelty of our approach is to leverage Norton's theorem for queueing networks and an efficient reinforcement learning algorithm for MDPs with the structure of birth-and-death processes.
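The abstract mentions admission control in MDPs with birth-and-death structure. A minimal illustrative sketch (not the paper's learning algorithm: rates are assumed known here, and the function and parameter names are hypothetical) evaluates the long-run average holding/rejection cost of a threshold admission policy in a single M/M/1-type queue, using the closed-form stationary distribution of the truncated birth-and-death chain:

```python
# Illustrative sketch only: average cost of a threshold admission policy
# in an M/M/1-type birth-and-death queue with known arrival/service rates.

def average_cost(lam, mu, h, c, threshold):
    """Long-run average cost when jobs are admitted only while the queue
    length is below `threshold` (birth-and-death chain on states 0..threshold).

    lam: arrival rate, mu: service rate,
    h: holding cost per job per unit time, c: cost per rejected job.
    """
    # Stationary distribution of the truncated chain: pi[n] ∝ (lam/mu)^n.
    rho = lam / mu
    weights = [rho ** n for n in range(threshold + 1)]
    z = sum(weights)
    pi = [w / z for w in weights]
    # Expected holding cost: h * E[queue length].
    holding = h * sum(n * p for n, p in enumerate(pi))
    # Arrivals finding the queue at the threshold are rejected,
    # at long-run rate lam * pi[threshold].
    rejection = c * lam * pi[threshold]
    return holding + rejection

# With known rates, the optimal threshold can be found by a simple scan;
# the paper's setting instead learns it from arrival/departure observations.
best = min(range(1, 20), key=lambda t: average_cost(0.8, 1.0, 1.0, 5.0, t))
```

The birth-and-death structure is what makes this tractable: the stationary distribution is a one-line product form, whereas a general queueing network would require Norton's theorem (as in the paper) to reduce it to such a chain.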
Pages: 31-79
Page count: 49