Versatile Dueling Bandits: Best-of-both World Analyses for Online Learning from Relative Preferences

Citations: 0
Authors
Saha, Aadirupa [1 ]
Gaillard, Pierre [2 ]
Affiliations
[1] Microsoft Res, New York, NY 10012 USA
[2] Univ Grenoble Alpes, INRIA, CNRS, Grenoble INP,LJK, F-38000 Grenoble, France
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162 | 2022
Keywords
RANKING;
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We study the problem of K-armed dueling bandits in both stochastic and adversarial environments, where the goal of the learner is to aggregate information through relative preferences over pairs of decision points queried in an online sequential manner. We first propose a novel reduction from any (general) dueling bandit problem to multi-armed bandits, which allows us to improve many existing results in dueling bandits. In particular, we give the first best-of-both-worlds result for the dueling bandits regret minimization problem: a unified framework that is guaranteed to perform optimally for both stochastic and adversarial preferences simultaneously. Moreover, our algorithm is also the first to achieve an optimal $O\big(\sum_{i=1}^{K} \frac{\log T}{\Delta_i}\big)$ regret bound against the Condorcet-winner benchmark, which scales optimally in both the number of arms $K$ and the instance-specific suboptimality gaps $\{\Delta_i\}_{i=1}^{K}$. This resolves the long-standing problem of designing an instance-wise, gap-dependent, order-optimal regret algorithm for dueling bandits (with matching lower bounds up to small constant factors). We further justify the robustness of our proposed algorithm by proving its optimal regret rate under adversarially corrupted preferences; this outperforms the existing state-of-the-art corrupted dueling results by a large margin. In summary, we believe our reduction idea will find broader scope in solving a diverse class of dueling bandit settings, which are otherwise studied separately from multi-armed bandits, often with more complex solutions and worse guarantees. The efficacy of our proposed algorithms is empirically corroborated against state-of-the-art dueling bandit methods.
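The abstract's core device is a black-box reduction from dueling bandits to standard multi-armed bandits. Below is a minimal Python sketch of that style of reduction, assuming an EXP3-style exponential-weights base learner whose distribution is used to draw both duel endpoints i.i.d., with importance-weighted loss estimates fed back; the estimator, function names, and parameters here are illustrative assumptions, not the paper's exact construction.

import numpy as np

rng = np.random.default_rng(0)

def duel_to_mab(prefer, K, T):
    """Sketch of a dueling-to-MAB reduction (illustrative, not the
    paper's exact algorithm): run one EXP3-style learner and draw
    both arms of each duel i.i.d. from its distribution."""
    eta = np.sqrt(np.log(K) / (K * T))    # standard EXP3 learning rate
    L = np.zeros(K)                       # cumulative importance-weighted losses
    p = np.full(K, 1.0 / K)
    for _ in range(T):
        w = np.exp(-eta * (L - L.min()))  # shift by min for numerical stability
        p = w / w.sum()
        i, j = rng.choice(K, size=2, p=p) # both duel endpoints drawn i.i.d. from p
        o = prefer(i, j)                  # 1 if arm i beats arm j, else 0
        # Losing a duel counts as loss 1; importance-weight by the
        # probability of having queried that arm.
        L[i] += (1 - o) / p[i]
        L[j] += o / p[j]
    return p  # final play distribution

# Hypothetical usage: 3 arms where arm 0 is the Condorcet winner
# (it beats every other arm with probability > 1/2).
P = np.array([[0.5, 0.7, 0.8],
              [0.3, 0.5, 0.6],
              [0.2, 0.4, 0.5]])
prefer = lambda i, j: int(rng.random() < P[i, j])
print(duel_to_mab(prefer, K=3, T=20000))  # mass should concentrate on arm 0

Drawing both duel endpoints from the same distribution keeps the feedback an unbiased estimate of each arm's dueling loss against the learner's own play, which is what allows standard multi-armed bandit guarantees to transfer to the dueling setting.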
Pages: 19011-19026
Page count: 16