ε-First Policies for Budget-Limited Multi-Armed Bandits

Cited by: 0
Authors
Tran-Thanh, Long [1]
Chapman, Archie [1]
Munoz de Cote, Enrique [1]
Rogers, Alex [1]
Jennings, Nicholas R. [1]
Affiliations
[1] Univ Southampton, Sch Elect & Comp Sci, Southampton SO17 1BJ, Hants, England
Source
PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-10) | 2010
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce the budget-limited multi-armed bandit (MAB), which captures situations where a learner's actions are costly and constrained by a fixed budget that is incommensurable with the rewards earned from the bandit machine, and then describe a first algorithm for solving it. Since the learner has a budget, the problem's duration is finite. Consequently, the optimal exploitation policy is not to pull the single best arm repeatedly, but to pull the combination of arms that maximises the agent's total reward within the budget. As such, the rewards of all arms must be estimated, because any of them may appear in the optimal combination. This difference from existing MABs means that new approaches to maximising the total reward are required. To this end, we propose an ε-first algorithm, in which the first ε portion of the budget is used solely to learn the arms' rewards (exploration), while the remaining (1 − ε) portion is used to maximise the received reward based on those estimates (exploitation). We derive bounds on the algorithm's loss for generic and uniform exploration methods, and compare its performance with traditional MAB algorithms under various distributions of rewards and costs, showing that it outperforms the others by up to 50%.
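The abstract's two-phase policy can be sketched in a few lines. The code below is an illustrative reconstruction, not the paper's implementation: `pull` is a hypothetical callback returning an arm's stochastic reward, pull costs are assumed known, and the exploitation phase uses a simple greedy reward-per-cost rule as a stand-in for the paper's knapsack-based arm selection.

```python
def epsilon_first(pull, costs, budget, epsilon):
    """Budget-limited MAB, epsilon-first policy (illustrative sketch).

    pull(i)  -> stochastic reward of arm i (hypothetical callback)
    costs[i] -> known cost of pulling arm i
    Returns the total reward collected within the budget.
    """
    n = len(costs)
    est = [0.0] * n      # running mean reward estimate per arm
    pulls = [0] * n
    total = 0.0

    # Exploration: spend the first epsilon fraction of the budget,
    # cycling uniformly over the arms.
    explore_budget = epsilon * budget
    spent = 0.0
    i = 0
    while spent + costs[i % n] <= explore_budget:
        a = i % n
        r = pull(a)
        pulls[a] += 1
        est[a] += (r - est[a]) / pulls[a]  # incremental mean
        spent += costs[a]
        total += r
        i += 1

    # Exploitation: with the remaining (1 - epsilon) budget, greedily
    # pull the arm with the best estimated reward-to-cost density that
    # still fits (a greedy approximation to the unbounded knapsack).
    remaining = budget - spent
    while True:
        feasible = [a for a in range(n) if costs[a] <= remaining]
        if not feasible:
            break
        a = max(feasible, key=lambda a: est[a] / costs[a])
        total += pull(a)
        remaining -= costs[a]
    return total
```

With deterministic rewards the behaviour is easy to trace: for two unit-cost arms paying 1.0 and 2.0, a budget of 10 and ε = 0.4, the policy explores each arm twice (spending 4) and then pulls the better arm six times.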
Pages: 1211-1216
Page count: 6