Fast Rates for Nonparametric Online Learning: From Realizability to Learning in Games

Cited: 15
Authors
Daskalakis, Constantinos [1]
Golowich, Noah [1]
Affiliations
[1] MIT, CSAIL, Cambridge, MA 02139 USA
Source
PROCEEDINGS OF THE 54TH ANNUAL ACM SIGACT SYMPOSIUM ON THEORY OF COMPUTING (STOC '22) | 2022
Keywords
Learning in games; proper online learning; fast rates; Littlestone dimension; sequential fat-shattering dimension
DOI
10.1145/3519935.3519950
CLC Number
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
We study fast rates of convergence in the setting of nonparametric online regression, where regret is defined with respect to an arbitrary function class of bounded complexity. Our contributions are two-fold: (1) In the realizable setting of nonparametric online regression with the absolute loss, we propose a randomized proper learning algorithm which achieves near-optimal cumulative loss in terms of the sequential fat-shattering dimension of the hypothesis class. In the setting of online classification with a class of Littlestone dimension d, our bound reduces to d · polylog(T). This result answers the question of whether proper learners can achieve near-optimal cumulative loss; previously, even for online classification, the best known cumulative loss was O(√(dT)). Further, for the real-valued (regression) setting, a cumulative loss bound with near-optimal scaling in the sequential fat-shattering dimension was not known even for improper learners, prior to this work. (2) Using the above result, we exhibit an independent learning algorithm for general-sum binary games of Littlestone dimension d, in which each player achieves regret O(d^(3/4) · T^(1/4)). This result generalizes analogous results of Syrgkanis et al. (2015), who showed that in finite games the optimal regret can be accelerated from O(√T) in the adversarial setting to O(T^(1/4)) in the game setting. To establish these results, we introduce several new techniques, including: a hierarchical aggregation rule to achieve the optimal cumulative loss for real-valued classes, a multi-scale extension of the proper online realizable learner of Hanneke et al. (2021), an approach showing that the output of such nonparametric learning algorithms is stable, and a proof that the minimax theorem holds in all online learnable games.
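The game-theoretic speedup in contribution (2) generalizes the finite-game result of Syrgkanis et al. (2015), which is driven by optimistic (predictive) no-regret dynamics. The sketch below is not the paper's algorithm (which handles nonparametric hypothesis classes); it is a minimal illustration of the finite-game precursor, Optimistic Hedge, run by both players of matching pennies. The step size eta, horizon T, and starting strategies are illustrative assumptions.

```python
import numpy as np

def optimistic_hedge(cum_loss, predicted_loss, eta):
    # Weight each action by exp(-eta * (cumulative loss + predicted next loss));
    # using the most recent loss vector as the prediction gives Optimistic Hedge.
    score = cum_loss + predicted_loss
    w = np.exp(-eta * (score - score.min()))  # shift scores for numerical stability
    return w / w.sum()

# Matching pennies as a zero-sum game with losses in [0, 1]:
# the row player's loss matrix is A; the column player's is 1 - A.
A = np.eye(2)
eta, T = 0.1, 2000

x = np.array([0.5, 0.5])              # row player's mixed strategy
y = np.array([0.9, 0.1])              # column player starts off-equilibrium
cum_lx, cum_ly = np.zeros(2), np.zeros(2)
avg_x = np.zeros(2)                   # time-averaged row strategy
realized_x = realized_y = 0.0

for _ in range(T):
    lx = A @ y                        # expected loss of each row action
    ly = (1.0 - A).T @ x              # expected loss of each column action
    realized_x += x @ lx
    realized_y += y @ ly
    cum_lx += lx
    cum_ly += ly
    avg_x += x / T
    # Each player updates with the just-observed loss doubling as the prediction.
    x = optimistic_hedge(cum_lx, lx, eta)
    y = optimistic_hedge(cum_ly, ly, eta)

regret_x = realized_x - cum_lx.min()  # regret vs. the best fixed row action
regret_y = realized_y - cum_ly.min()
print(round(regret_x, 2), round(regret_y, 2), np.round(avg_x, 2))
```

Because both players run the same slowly-drifting dynamics, the "last loss" prediction is accurate and each player's regret stays far below the adversarial Θ(√T) barrier, while the time-averaged strategies approach the unique equilibrium (1/2, 1/2).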
Pages: 846-859
Page count: 14
References
62 in total
[1] Aaronson, Scott; Chen, Xinyi; Hazan, Elad; Kale, Satyen; Nayak, Ashwin. Online learning of quantum states [J]. Journal of Statistical Mechanics: Theory and Experiment, 2019, 2019(12).
[2] Alon, N. 2019, arXiv, DOI arXiv:1806.00949.
[3] Alon, Noga. 2021, arXiv.
[4] Angluin, D. 1988, Machine Learning, 2: 319, DOI 10.1023/A:1022821128753.
[5] [Anonymous], 2006, PREDICTION LEARNING.
[6] [Anonymous], 2018, Proceedings of the 31st Conference on Learning Theory.
[7] [Anonymous], 2014, Proceedings of the 27th Conference on Learning Theory.
[8] Bartlett, P.L.; Bousquet, O.; Mendelson, S. Local Rademacher complexities [J]. Annals of Statistics, 2005, 33(4): 1497-1537.
[9] Beck, A.; Teboulle, M. Mirror descent and nonlinear projected subgradient methods for convex optimization [J]. Operations Research Letters, 2003, 31(3): 167-175.
[10] Ben-David, Shai. 2009, Proceedings of the 22nd Annual Conference on Learning Theory, p. 1.